#####EOF##### Artificial Intelligence Will Make Forging Anything Entirely Too Easy | WIRED
AI Will Make Forging Anything Entirely Too Easy

Gillian Blease/Getty Images

“Lordy, I hope there are tapes,” said an exasperated James Comey in his testimony before the Senate Intelligence Committee on June 8. Comey’s wish is a familiar one for individuals accused of lying when the stakes are high. The former FBI director wished for tapes because, in our society, audio and video recordings serve as a final arbiter of truth. He said, she said always loses to what the tape shows.

WIRED OPINION

ABOUT

Greg Allen (@Gregory_C_Allen) is an adjunct fellow at the Center for a New American Security. His study on AI and national security will be published this month through the Harvard Belfer Center.

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

Lyrebird, a deep learning tech startup based in Montreal, is developing technology that allows anyone to produce surprisingly realistic-sounding speech with the voice of any individual. Lyrebird’s demo generates speech, including varied intonation, in the voices of Donald Trump, Barack Obama, and Hillary Clinton. For now, the impersonations are impressive, but also possess a fuzzy, robotic quality that allows even an untrained ear to easily recognize the voice as computer-generated. Still, the technology is making rapid progress. Creative software giant Adobe is working on similar technology, announcing its goal of producing “Photoshop for audio.”

Researchers at Stanford and elsewhere have developed astonishing capabilities in video forgery. Using only an off-the-shelf webcam, their AI-based software allows an individual to realistically change the facial expressions and speech-related mouth movements of a person in a YouTube video. Watch as one researcher edits a video of George W. Bush to insert new facial and speech expressions, all in real time.

Other AI research groups have demonstrated the ability to run image recognition capabilities in reverse, allowing the generation of synthetic images based on text description alone. Jeff Clune, one of the researchers leading this work, told The Verge that “people send me real images and I start to wonder if they look fake. And when they send me fake images I assume they’re real because the quality is so good.”

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today’s CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

The Russian intelligence service employs thousands of full-time workers who author fake news articles, social media posts, and comments on mainstream websites. These agents in turn control millions of botnet social media accounts that tweet about politics in order to shape national discourse. A study by the Computational Propaganda Research Project at the Oxford Internet Institute found that half of all Twitter accounts regularly commenting about politics in Russia were bots. And these operations don’t stop at the Russian border: In the US, Russian social media bots have already demonstrated an ability to drive mainstream media coverage of fake news and even influence American stock prices.

What happens when those agents and botnets are also armed with the ability to automatically generate and share not merely fake tweets of fake news but also fake HD video and audio? The technology industry and governments should not stand idly by to find out. The threats from the rise of this technology are multifaceted. So, too, must be the solutions.

Some will be technological in nature. Just as there are (admittedly imperfect) technological solutions that attempt to prevent image software like Photoshop from being used to counterfeit money, there may be technological solutions that can mitigate the worst impacts of AI-enabled forgery. Blockchain, the same technology used to secure cryptocurrencies such as Bitcoin, offers one possibility: It provides cryptographically secured evidence for the ordering of bitcoin transactions so that no one can spend the same cryptocurrency twice. It may be possible to design cameras and microphones that use blockchain technology to create an unimpeachable record of the date of creation of video recordings. While this would not prevent later editing or forged counterevidence, it would at least allow cryptographically secured evidence to show that a given file existed at a certain date, which could allow experts to infer that later versions may have been edited.
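To make that idea concrete, here is a minimal sketch, assuming OpenSSL is installed, of the first half of such a scheme: hashing a recording so that only the digest, never the file itself, would need to be published to a public ledger. The file name is hypothetical, and the anchoring step is left as a comment because it depends on the ledger being used.

```c
/*
 * Illustrative sketch: hash a recording with SHA-256 so that the digest,
 * rather than the file, could be anchored in a public ledger as evidence
 * that this exact file existed on a given date.
 * Assumes OpenSSL; "recording.mp4" is a hypothetical file name.
 * Compile with: cc timestamp.c -lcrypto
 */
#include <stdio.h>
#include <openssl/sha.h>

int main(void) {
    FILE *f = fopen("recording.mp4", "rb");
    if (!f) { perror("fopen"); return 1; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    /* Publishing this digest in a ledger transaction would timestamp the
     * recording without revealing its contents; that step depends on the
     * ledger used and is omitted here. */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}
```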

Other solutions will be regulatory and procedural. Police officers and prosecutors may have to develop standards of evidence for proving the chain of custody of a particular camera or microphone. An anonymously emailed video file may ultimately become as irrelevant as anonymously emailed witness testimony is today. And all manner of institutions may come to have a new appreciation for conversations held face to face around a table when phone calls and video chats may not only be digitally intercepted but also digitally impersonated.

Since the late 1800s, with the invention of the photograph and phonograph, society has had access to technology that can, with some important caveats, provide an answer in disputes about the truth. President Richard Nixon said he had no knowledge of the Watergate burglary coverup. The tapes proved that he lied. Unless government and business leaders seriously face this challenge, we will have to live in a society where there is no ultimate arbiter of truth. Perhaps in 10 years James Comey’s prayer will be answered and tapes will emerge of his conversations with Donald Trump. At that point, however, citizens and historians alike will have to wonder whether the tapes are real or yet another case of AI-enabled forgery.

Greg Allen (@Gregory_C_Allen) is an adjunct fellow at the Center for a New American Security. WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.

#####EOF##### This Wireless Explosives Detector Is the Size of a Postage Stamp | WIRED
This Wireless Explosives Detector Is the Size of a Postage Stamp

A wireless, battery-free RFID sensor tag for detection of chemicals such as explosives and oxidizers.
GE Global Research

For public safety agencies, sniffing out explosives and other contraband is a tricky task. Handheld explosive detectors can be as small as a purse, but still must be manually operated. Permanently mounted sensors need to be even bigger. Dogs are useful in some scenarios, but they're expensive to deploy en masse and must always have a handler.

That's why GE Global Research is working on a new way to detect dangerous substances, one that costs about a nickel, can be deployed anywhere, and doesn't need human supervision. The device is a tiny RFID tag that activates only when it detects certain explosives or oxidizing agents. In effect, it could replace gigantic explosive scanners with something a couple inches across.

Developed in partnership with the Technical Support Working Group (TSWG), an inter-agency task force dedicated to anti-terrorism, the new RFID tag could dramatically drive down the cost of scanning for dangerous materials in places like cargo ports and airports.

Conventional RFID tags converted into sensors by applying a sensing material to one side of the tag. The sensing material is white.

GE Global Research

RFID tags use electromagnetic fields to transfer data, and are commonly found on things like key cards that open doors and E-ZPass toll transponders. GE is keeping mum on the details of how they're being used here, but says it has developed "a sensing material that responds to explosives and oxidizers" that can be built into the device. Radislav Potyrailo, a GE scientist, compared the tags to a smoke alarm or CO2 sensor. "We have developed sensing materials that are quite sensitive for this type of detection."

The tags can be placed in cargo containers, shipping packages, airports, and government buildings, to name a few locations. The team believes they'll be able to sit dormant for months and still trigger effectively, without any need for power or recharging. Effectively, a tag can be slapped nearly anywhere and will activate only once a target chemical is found. The range at which the tags can be read depends on the strength of the reader's pickup antenna, typically anywhere from a few inches to a few dozen feet. That may seem limited, but because GE believes the tags can cost just pennies each, they can be installed in vast numbers very cheaply, basically everywhere.

Currently, GE's focus is on explosives and oxidizers (frequently used in improvised explosive devices), but the team believes it can develop similar tags to detect biological matter like spores or bacteria. Commercialization could arrive as soon as the next few years.

#####EOF##### An Unprecedented Look at Stuxnet, the World's First Digital Weapon | WIRED
An Unprecedented Look at Stuxnet, the World's First Digital Weapon

This recent undated satellite image provided by Space Imaging/Inta SpaceTurk shows the once-secret Natanz nuclear complex in Natanz, Iran, about 150 miles south of Tehran.
AP Photo/Space Imaging/Inta SpaceTurk, HO

In January 2010, inspectors with the International Atomic Energy Agency visiting the Natanz uranium enrichment plant in Iran noticed that centrifuges used to enrich uranium gas were failing at an unprecedented rate. The cause was a complete mystery—apparently as much to the Iranian technicians replacing the centrifuges as to the inspectors observing them.

Five months later a seemingly unrelated event occurred. A computer security firm in Belarus was called in to troubleshoot a series of computers in Iran that were crashing and rebooting repeatedly. Again, the cause of the problem was a mystery. That is, until the researchers found a handful of malicious files on one of the systems and discovered the world's first digital weapon.

Stuxnet, as it came to be known, was unlike any other virus or worm that came before. Rather than simply hijacking targeted computers or stealing information from them, it escaped the digital realm to wreak physical destruction on equipment the computers controlled.

Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon, written by WIRED senior staff writer Kim Zetter, tells the story behind Stuxnet's planning, execution and discovery. In this excerpt from the book, which will be released November 11, Stuxnet has already been at work silently sabotaging centrifuges at the Natanz plant for about a year. An early version of the attack weapon manipulated valves on the centrifuges to increase the pressure inside them and damage the devices as well as the enrichment process. Centrifuges are large cylindrical tubes—connected by pipes in a configuration known as a "cascade"—that spin at supersonic speed to separate isotopes in uranium gas for use in nuclear power plants and weapons. At the time of the attacks, each cascade at Natanz held 164 centrifuges. Uranium gas flows through the pipes into the centrifuges in a series of stages, becoming further "enriched" at each stage of the cascade as isotopes needed for a nuclear reaction are separated from other isotopes and become concentrated in the gas.


As the excerpt begins, it's June 2009—a year or so since Stuxnet was first released, but still a year before the covert operation will be discovered and exposed. As Iran prepares for its presidential elections, the attackers behind Stuxnet are also preparing their next assault on the enrichment plant with a new version of the malware. They unleash it just as the enrichment plant is beginning to recover from the effects of the previous attack. Their weapon this time is designed to manipulate computer systems made by the German firm Siemens that control and monitor the speed of the centrifuges. Because the computers are air-gapped from the internet, however, they cannot be reached directly by the remote attackers. So the attackers have designed their weapon to spread via infected USB flash drives. To get Stuxnet to its target machines, the attackers first infect computers belonging to five outside companies that are believed to be connected in some way to the nuclear program. The aim is to make each "patient zero" an unwitting carrier who will help spread and transport the weapon on flash drives into the protected facility and the Siemens computers. Although the five companies have been referenced in previous news reports, they've never been identified. Four of them are identified in this excerpt.

The Lead-Up to the 2009 Attack

The two weeks leading up to the release of the next attack were tumultuous ones in Iran. On June 12, 2009, the presidential elections between incumbent Mahmoud Ahmadinejad and challenger Mir-Hossein Mousavi didn’t turn out the way most expected. The race was supposed to be close, but when the results were announced—two hours after the polls closed—Ahmadinejad had won with 63 percent of the vote over Mousavi’s 34 percent. The electorate cried foul, and the next day crowds of angry protesters poured into the streets of Tehran to register their outrage and disbelief. According to media reports, it was the largest civil protest the country had seen since the 1979 revolution ousted the shah and it wasn’t long before it became violent. Protesters vandalized stores and set fire to trash bins, while police and Basijis, government-loyal militias in plainclothes, tried to disperse them with batons, electric prods, and bullets.

That Sunday, Ahmadinejad gave a defiant victory speech, declaring a new era for Iran and dismissing the protesters as nothing more than soccer hooligans soured by the loss of their team. The protests continued throughout the week, though, and on June 19, in an attempt to calm the crowds, the Ayatollah Ali Khamenei sanctioned the election results, insisting that the margin of victory—11 million votes—was too large to have been achieved through fraud. The crowds, however, were not assuaged.

The next day, a twenty-six-year-old woman named Neda Agha-Soltan got caught in a traffic jam caused by protesters and was shot in the chest by a sniper’s bullet after she and her music teacher stepped out of their car to observe.

Two days later on June 22, a Monday, the Guardian Council, which oversees elections in Iran, officially declared Ahmadinejad the winner, and after nearly two weeks of protests, Tehran became eerily quiet. Police had used tear gas and live ammunition to disperse the demonstrators, and most of them were now gone from the streets. That afternoon, at around 4:30 p.m. local time, as Iranians nursed their shock and grief over events of the previous days, a new version of Stuxnet was being compiled and unleashed.

Recovery From Previous Attack

While the streets of Tehran had been in turmoil, technicians at Natanz had been experiencing a period of relative calm. Around the first of the year, they had begun installing new centrifuges again, and by the end of February they had about 5,400 of them in place, close to the 6,000 that Ahmadinejad had promised the previous year. Not all of the centrifuges were enriching uranium yet, but at least there was forward movement again, and by June the number had jumped to 7,052, with 4,920 of these enriching gas. In addition to the eighteen cascades enriching gas in unit A24, there were now twelve cascades in A26 enriching gas. An additional seven cascades had even been installed in A28 and were under vacuum, being prepared to receive gas.

Iranian President Mahmoud Ahmadinejad during a tour of centrifuges at Natanz in 2008.

Office of the Presidency of the Islamic Republic of Iran

The performance of the centrifuges was improving too. Iran’s daily production of low-enriched uranium was up 20 percent and would remain consistent throughout the summer of 2009. Despite the previous problems, Iran had crossed a technical milestone and had succeeded in producing 839 kilograms of low-enriched uranium—enough to achieve nuclear-weapons breakout capability. If it continued at this rate, Iran would have enough enriched uranium to make two nuclear weapons within a year. This estimate, however, was based on the capacity of the IR-1 centrifuges currently installed at Natanz. But Iran had already installed IR-2 centrifuges in a small cascade in the pilot plant, and once testing on these was complete and technicians began installing them in the underground hall, the estimate would have to be revised. The more advanced IR-2 centrifuges were more efficient. It took 3,000 IR-1s to produce enough uranium for a nuclear weapon in one year, but it would take just 1,200 IR-2 centrifuges to do the same.

Cue Stuxnet 1.001, which showed up in late June.

The Next Assault

To get their weapon into the plant, the attackers launched an offensive against computers owned by four companies. All of the companies were involved in industrial control and processing of some sort, either manufacturing products and assembling components or installing industrial control systems. They were all likely chosen because they had some connection to Natanz as contractors and provided a gateway through which to pass Stuxnet to Natanz through infected employees.

To ensure greater success at getting the code where it needed to go, this version of Stuxnet had two more ways to spread than the previous one. Stuxnet 0.5 could spread only by infecting Step 7 project files—the files used to program Siemens PLCs. This version, however, could spread via USB flash drives using the Windows Autorun feature or through a victim’s local network using the print-spooler zero-day exploit that Kaspersky Lab, the antivirus firm based in Russia, and Symantec later found in the code.

Based on the log files in Stuxnet, a company called Foolad Technic was the first victim. It was infected at 4:40 a.m. on June 23, a Tuesday. But then it was almost a week before the next company was hit.

The following Monday, about five thousand marchers walked silently through the streets of Tehran to the Qoba Mosque to honor victims killed during the recent election protests. Late that evening, around 11:20 p.m., Stuxnet struck machines belonging to its second victim—a company called Behpajooh.

It was easy to see why Behpajooh was a target. It was an engineering firm based in Esfahan, the site of Iran’s new uranium conversion plant, which was built to turn milled uranium ore into gas for enriching at Natanz, and also the location of Iran’s Nuclear Technology Center, believed to be the base for Iran’s nuclear weapons development program. Behpajooh had also been named in US federal court documents in connection with Iran’s illegal procurement activities.

Behpajooh was in the business of installing and programming industrial control and automation systems, including Siemens systems. The company’s website made no mention of Natanz, but it did mention that the company had installed Siemens S7-400 PLCs, as well as the Step 7 and WinCC software and Profibus communication modules at a steel plant in Esfahan. This was, of course, all of the same equipment Stuxnet targeted at Natanz.

At 5:00 a.m. on July 7, nine days after Behpajooh was hit, Stuxnet struck computers at Neda Industrial Group, as well as a company identified in the logs only as CGJ, believed to be Control Gostar Jahed. Both companies designed or installed industrial control systems.

Iranian President Mahmoud Ahmadinejad observes computer monitors at the Natanz uranium enrichment plant in central Iran, where Stuxnet was believed to have infected PCs and damaged centrifuges.

Office of the Presidency of the Islamic Republic of Iran

Neda designed and installed control systems, precision instrumentation, and electrical systems for the oil and gas industry in Iran, as well as for power plants and mining and process facilities. In 2000 and 2001 the company had installed Siemens S7 PLCs in several gas pipeline operations in Iran and had also installed Siemens S7 systems at the Esfahan Steel Complex. Like Behpajooh, Neda had been identified on a proliferation watch list for its alleged involvement in illicit procurement activity and was named in a US indictment for receiving smuggled microcontrollers and other components.

About two weeks after it struck Neda, a control engineer who worked for the company popped up on a Siemens user forum on July 22 complaining about a problem that workers at his company were having with their machines. The engineer, who posted a note under the user name Behrooz, indicated that all PCs at his company were having an identical problem with a Siemens Step 7 .DLL file that kept producing an error message. He suspected the problem was a virus that spread via flash drives.

When he used a DVD or CD to transfer files from an infected system to a clean one, everything was fine, he wrote. But when he used a flash drive to transfer files, the new PC started having the same problems the other machine had. A USB flash drive, of course, was Stuxnet’s primary method of spreading. Although Behrooz and his colleagues scanned for viruses, they found no malware on their machines. There was no sign in the discussion thread that they ever resolved the problem at the time.

It's not clear how long it took Stuxnet to reach its target after infecting machines at Neda and the other companies, but between June and August the number of centrifuges enriching uranium gas at Natanz began to drop. Whether this was the result solely of the new version of Stuxnet or the lingering effects of the previous version is unknown. But by August that year, only 4,592 centrifuges were enriching at the plant, a decrease of 328 centrifuges since June. By November, that number had dropped even further to 3,936, a difference of 984 in five months. What's more, although new machines were still being installed, none of them were being fed gas.

Clearly there were problems with the cascades, and technicians had no idea what they were. The changes mapped precisely, however, to what Stuxnet was designed to do.

Reprinted from Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. Copyright © 2014 by Kim Zetter. Published by Crown Publishers, an imprint of Random House LLC.

#####EOF##### Critical "Meltdown" and "Spectre" Flaws Break Basic Security for Intel, AMD, ARM Computers | WIRED
A Critical Intel Flaw Breaks Basic Security for Most Computers

Joan Cros/NurPhoto/Getty Images

One of the most basic premises of computer security is isolation: If you run somebody else's sketchy code as an untrusted process on your machine, you should restrict it to its own tightly sealed playpen. Otherwise, it might peer into other processes, or snoop around the computer as a whole. So when a security flaw in computers' most deep-seated hardware puts a crack in those walls, as one newly discovered vulnerability in millions of processors has done, it breaks some of the most fundamental protections computers promise—and sends practically the entire industry scrambling.

Earlier this week, security researchers took note of a series of changes Linux and Windows developers began rolling out in beta updates to address a critical security flaw: A bug in Intel chips allows low-privilege processes to access memory in the computer's kernel, the machine's most privileged inner sanctum. Theoretical attacks that exploit that bug, based on quirks in features Intel has implemented for faster processing, could allow malicious software to spy deeply into other processes and data on the target computer or smartphone. And on multi-user machines, like the servers run by Google Cloud Services or Amazon Web Services, they could even allow hackers to break out of one user's process, and instead snoop on other processes running on the same shared server.

On Wednesday evening, a large team of researchers at Google's Project Zero, universities including the Graz University of Technology, the University of Pennsylvania, the University of Adelaide in Australia, and security companies including Cyberus and Rambus together released the full details of two attacks based on that flaw, which they call Meltdown and Spectre.

"These hardware bugs allow programs to steal data which [is] currently processed on the computer," reads a description of the attacks on a website the researchers created. "While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs."

Although both attacks are based on the same general principle, Meltdown allows malicious programs to gain access to higher-privileged parts of a computer's memory, while Spectre steals data from the memory of other applications running on a machine. And while the researchers say that Meltdown is limited to Intel chips, they say that they've verified Spectre attacks on AMD and ARM processors, as well.

Ben Gras, a security researcher with Vrije Universiteit Amsterdam who specializes in chip-level hardware security, says that the attacks represent a deep and serious security breach. "With these glitches, if there's any way an attacker can execute code on a machine, it can’t be contained anymore," he says. (Gras was clear that he hadn't participated in any research that unearthed or reproduced the vulnerability, but he has watched the revelations of Intel's vulnerability unfold in the security community.) "For any process that’s untrusted and isolated, that safety is gone now," Gras adds. "Every process can spy on every other process and access secrets in the operating system kernel."

Meltdown and Spectre

Prior to the official revelation of Meltdown and Spectre on Wednesday, Erik Bosman, a colleague of Gras in Vrije Universiteit Amsterdam's VUSEC security group, successfully reproduced one of the Intel attacks, which take advantage of a feature in chips known as "speculative execution." When modern Intel processors execute code and come to a point in an algorithm where instructions branch in two different directions, depending on input data—whether there's enough money in an account to process a transaction, for instance—they save time by "speculatively" venturing down those forks. In other words, they take a guess, and execute instructions to get a head start. If the processor learns that it ventured down the wrong path, it jumps back to the fork in the road, and throws out the speculative work.

VUSEC's Bosman confirmed that when Intel processors perform that speculative execution, they don't fully segregate processes that are meant to be low-privilege and untrusted from the highest-privilege memory in the computer's kernel. That means a hacker can trick the processor into allowing unprivileged code to peek into the kernel's memory with speculative execution.

"The processor basically runs too far ahead, executing instructions that it should not execute," says Daniel Gruss, one of the researchers from the Graz University of Technology who discovered the attacks.

Retrieving any data from that privileged peeking isn't simple, since once the processor stops its speculative execution and jumps back to the fork in its instructions, it throws out the results. But before it does, it stores them in its cache, a collection of temporary memory allotted to the processor to give it quick access to recent data. By carefully crafting requests to the processor and seeing how fast it responds, a hacker's code can figure out whether the requested data is in the cache or not. And with a series of speculative executions and cache probes, he or she can begin to reconstruct portions of the computer's high-privilege memory, including even sensitive personal information or passwords.
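To see why cache probing works at all, consider a minimal sketch of the underlying timing trick, assuming an x86 processor and a GCC- or Clang-style compiler: a load from memory that is already cached completes in far fewer cycles than one that has just been flushed, and that gap is measurable from ordinary user code. This is only the measurement primitive, not an exploit.

```c
/*
 * Minimal sketch of the timing primitive behind cache probing: a load
 * from cached memory is much faster than one from flushed memory, and
 * the difference is visible to unprivileged code. Not an exploit.
 * Assumes an x86 CPU and GCC/Clang. Compile with: cc -O0 probe.c
 */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint64_t time_access(volatile uint8_t *p) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);  /* timestamp before the load */
    (void)*p;                         /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);    /* timestamp after the load */
    return end - start;
}

int main(void) {
    static uint8_t probe[4096];

    probe[0] = 1;                          /* touch it: now in the cache */
    uint64_t hot = time_access(&probe[0]);

    _mm_clflush(&probe[0]);                /* evict from every cache level */
    uint64_t cold = time_access(&probe[0]);

    /* A large cold/hot gap is what lets attack code infer whether data it
     * coaxed the processor into touching speculatively is now cached. */
    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```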

Many security researchers who spotted signs of developers working to fix that bug had speculated that the Intel flaw merely allowed hackers to defeat a security protection known as Kernel Address Space Layout Randomization, which makes it far more difficult for hackers to find the location of the kernel in memory before they use other tricks to attack it. But Bosman confirms theories that the bug is more serious: It allows malicious code to not only locate the kernel in memory, but steal that memory's contents, too.

"Out of the two things that were speculated, this is the worst outcome," Bosman says.

A Tough Fix

In a statement responding to the Meltdown and Spectre research, Intel noted that "these exploits do not have the potential to corrupt, modify, or delete data," though they do have the ability to spy on privileged data. The statement also argued that "many types of computing devices—with many different vendors’ processors and operating systems—are susceptible to these exploits," mentioning ARM and AMD processors as well.

"I can confirm that Arm have been working together with Intel and AMD to address a side-channel analysis method which exploits speculative execution techniques used in certain high-end processors, including some of our Cortex-A processors," says ARM public relations director Phil Hughes. "This method requires malware running locally and could result in data being accessed from privileged memory." Hughes notes that ARM's IoT-focused Cortex-M line is unaffected.

In an email to WIRED, AMD noted that the research was performed in a "controlled, dedicated lab environment," and that because of its processor architecture the company believes that "there is near zero risk to AMD products at this time."

Microsoft, which relies heavily on Intel processors in its computers, says that it has updates forthcoming to address the problem. "We’re aware of this industry-wide issue and have been working closely with chip manufacturers to develop and test mitigations to protect our customers," the company said in a statement. "We are in the process of deploying mitigations to cloud services and are releasing security updates today to protect Windows customers against vulnerabilities affecting supported hardware chips from AMD, ARM, and Intel. We have not received any information to indicate that these vulnerabilities had been used to attack our customers."

Linux developers have already released a fix, apparently based on a paper recommending deep changes to operating systems known as KAISER, released earlier this year by researchers at the Graz University of Technology.

Apple released a statement Thursday confirming that "all Mac systems and iOS devices are affected," though the Apple Watch is not. "Apple has already released mitigations in iOS 11.2, macOS 10.13.2, and tvOS 11.2 to help defend against Meltdown," the company said. "In the coming days we plan to release mitigations in Safari to help defend against Spectre. We continue to develop and test further mitigations for these issues and will release them in upcoming updates of iOS, macOS, tvOS, and watchOS."

Amazon, which offers cloud services on shared server setups, says that it will take steps to resolve the issue soon as well. "This is a vulnerability that has existed for more than 20 years in modern processor architectures like Intel, AMD, and ARM across servers, desktops, and mobile devices," the company said in a statement. "All but a small single-digit percentage of instances across the Amazon EC2 fleet are already protected. The remaining ones will be completed in the next several hours."

Google, which offers similar cloud services, pointed WIRED to a chart of Meltdown and Spectre's effects on its services, which states that the security issue has been resolved in all of the company's infrastructure.


Those operating system patches that fix the Intel flaw may also come at a cost: Better isolating the kernel memory from unprivileged memory could create significant slowdowns for certain processes. According to an analysis by the Register, which was also the first to report on the Intel flaw, those delays could be as much as 30 percent in some cases, although some processes and newer processors are likely to experience less significant slowdowns. Intel, for its part, wrote in its statement that "performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time."

Until the patches for Meltdown and Spectre roll out more widely, it's not clear just what the speed cost of neutering those attacks may turn out to be. But even if the updates result in a performance hit, it may be a worthwhile safeguard: Better to put the brakes on your processor, perhaps, than allow it to spill your computer's most sensitive secrets.

This story has been updated to include comments from Intel, Microsoft, Amazon, Google, AMD, ARM, and Apple, as well as full research details from Google's Project Zero et al.

#####EOF##### What is GDPR? The summary guide to GDPR compliance in the UK | WIRED UK

What is GDPR? The summary guide to GDPR compliance in the UK

The General Data Protection Regulation, or GDPR, has overhauled how businesses process and handle data. Our need-to-know GDPR guide explains what the changes mean for you


21 Jan 2019
iStock / art-sonik

Europe is now covered by the world's strongest data protection rules. The mutually agreed General Data Protection Regulation (GDPR) came into force on May 25, 2018, and was designed to modernise laws that protect the personal information of individuals.

The data protection rules GDPR replaced were first created across Europe during the 1990s and had struggled to keep pace with rapid technological changes. GDPR alters how businesses and public sector organisations can handle the information of their customers. It also boosts the rights of individuals and gives them more control over their information.

Elizabeth Denham, the UK's information commissioner, who is in charge of data protection enforcement, says GDPR brings in big changes but has warned they don't change everything. "The GDPR is a step change for data protection," she says. "It's still an evolution, not a revolution." For businesses which were already complying with pre-GDPR rules, the new regime should be a manageable shift, Denham says.

But there has been plenty of confusion around GDPR. To help clear things up, here's WIRED's guide to GDPR.

What is GDPR exactly?

The GDPR is Europe's new framework for data protection laws – it replaces the previous 1995 data protection directive. Previous UK law was based upon this directive.

The EU's GDPR website says the legislation is designed to "harmonise" data privacy laws across Europe as well as give greater protection and rights to individuals. Within the GDPR there are large changes for the public as well as businesses and bodies that handle personal information, which we'll explain in more detail later.

After more than four years of discussion and negotiation, GDPR was adopted by both the European Parliament and the European Council in April 2016. The underpinning regulation and directive were published at the end of that month.

After publication of GDPR in the EU Official Journal in May 2016, it came into force on May 25, 2018. The two-year preparation period gave businesses and public bodies covered by the regulation time to prepare for the changes.

GDPR Summary


When does the new regulation start?
May 25, 2018
Who will enforce it in the UK?
The Information Commissioner's Office
What's new?
There are new rights for people to access the information companies hold about them, obligations for better data management for businesses, and a new regime of fines
Does Brexit matter?
The UK has implemented a new Data Protection Act which largely includes all the provisions of the GDPR. There are some small changes but our law is largely the same

What did GDPR replace?

GDPR applies across the entirety of Europe but each individual country has the ability to make its own small changes. In the UK, the government has created a new Data Protection Act (2018) which replaces the 1998 Data Protection Act.

The new UK Data Protection Act was passed just before GDPR came into force, after spending several months in draft formats and passing its way through the House of Commons and House of Lords. The Data Protection Act 2018 can be found here.

As the law was passed there were some controversies. It was amended to protect cybersecurity researchers who work to uncover abuses of personal data, after critics said the law could see their research be criminalised. Politicians also attempted to say there should be a second Leveson inquiry into press standards in the UK but this was dropped at the last minute.

Is my company/startup/charity going to be impacted?

In short, yes. Individuals, organisations, and companies that are either 'controllers' or 'processors' of personal data will be covered by the GDPR. "If you are currently subject to the DPA, it is likely that you will also be subject to the GDPR," the ICO says on its website.

Both personal data and sensitive personal data are covered by GDPR. Personal data, a complex category of information, broadly means a piece of information that can be used to identify a person. This can be a name, address, IP address... you name it. Sensitive personal data encompasses genetic data, information about religious and political views, sexual orientation, and more.

The definitions are largely the same as those that were previously included in data protection laws. Where GDPR differs from previous data protection laws is that pseudonymised personal data can also fall under the law – if it's still possible that a person could be identified from it.
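To illustrate why pseudonymised data can stay in scope, here is a small sketch, assuming OpenSSL and using an invented key and email address: a keyed hash replaces the identifier, but anyone holding the key can regenerate the pseudonym and link it back to the person, so the data remains identifiable.

```c
/*
 * Sketch of pseudonymisation, and of why it can still be personal data.
 * Assumes OpenSSL; the key and the email address are invented.
 * Compile with: cc pseudonym.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void) {
    const unsigned char key[] = "org-internal-secret";  /* hypothetical key */
    const char *email = "alice@example.com";            /* hypothetical datum */

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    /* Replace the identifier with a keyed hash (HMAC-SHA256). */
    HMAC(EVP_sha256(), key, (int)(sizeof key - 1),
         (const unsigned char *)email, strlen(email), mac, &len);

    printf("pseudonym for %s: ", email);
    for (unsigned int i = 0; i < len; i++) printf("%02x", mac[i]);
    printf("\n");

    /* The same key always yields the same pseudonym, so records stay
     * linkable -- and re-identifiable by the key holder -- which is why
     * the GDPR can still treat the data as personal. */
    return 0;
}
```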

So, what's different?

In the full text of GDPR there are 99 articles setting out the rights of individuals and obligations placed on organisations covered by the regulation.

There are eight rights for individuals. These include giving people easier access to the data companies hold about them and placing a clear responsibility on organisations to obtain the consent of people they collect information about; the regulation also introduces a new fines regime.

Helen Dixon, the data protection commissioner for Ireland, who has major technology company offices under her jurisdiction, says the new regulation was needed and is a positive move. In the build-up to GDPR, she said startups need to have more awareness of the rules.

"One of the issues with startups is that when they're going through all the formalities new businesses go through, there's no data protection hook at that stage," Dixon said.

Who is in charge of GDPR in the UK?


Government
The Department for Culture, Media and Sport is the government arm responsible for ensuring that UK law complies with the requirements of GDPR. The government body was also responsible for creating the UK's Data Protection Act but won't have control of the day-to-day elements of GDPR once it is enforced.
The regulator
The Information Commissioner's Office (ICO) will be responsible for enforcing GDPR. The ICO has the power to conduct criminal investigations and issue fines. It is also providing organisations with huge amounts of guidance about how to comply with GDPR.

Accountability and compliance

Companies covered by the GDPR are accountable for their handling of people's personal information. This can include having data protection policies, carrying out data protection impact assessments, and keeping relevant documents on how data is processed.

In recent years, there have been a score of massive data breaches, including millions of Yahoo, LinkedIn, and MySpace account details. Under GDPR, the "destruction, loss, alteration, unauthorised disclosure of, or access to" people's data has to be reported to a country's data protection regulator where it could have a detrimental impact on those who it is about. This can include, but isn't limited to, financial loss, confidentiality breaches, damage to reputation and more. The ICO has to be told about a breach within 72 hours of an organisation finding out about it, and the people it impacts also need to be told.

For companies that have more than 250 employees, there's a need to have documentation of why people's information is being collected and processed, descriptions of the information that's held, how long it's being kept for and descriptions of technical security measures in place.

Additionally, companies that have "regular and systematic monitoring" of individuals at a large scale or process a lot of sensitive personal data have to employ a data protection officer (DPO). For many organisations covered by GDPR, this may mean having to hire a new member of staff – although larger businesses and public authorities may already have people in this role. In this job, the person has to report to senior members of staff, monitor compliance with GDPR and be a point of contact for employees and customers. "It means that data protection will be a boardroom issue in a way it hasn't been in the past," Denham says.

There's also a requirement for businesses to obtain consent to process data in some situations. When an organisation is relying on consent to lawfully use a person's information they have to clearly explain that consent is being given and there has to be a "positive opt-in". A blog post from Denham explains there are multiple ways for organisations to process people's data that doesn't rely upon consent.

Access to your data

As well as putting new obligations on the companies and organisations collecting personal data, the GDPR also gives individuals a lot more power to access the information that's held about them.

A Subject Access Request (SAR) allows an individual the ability to ask a company or organisation to provide data about them. Previously, these requests cost £10 but GDPR scraps the cost and makes it free to ask for your information. When someone makes a SAR businesses must stump up the information within one month. Everyone will have the right to get confirmation that an organisation has information about them, access to this information and any other supplementary information. As Dixon points out, big technology companies, as well as smaller startups, will have to give users more control over their data.

As well as this the GDPR bolsters a person's rights around automated processing of data. The ICO says individuals "have the right not to be subject to a decision" if it is automatic and it produces a significant effect on a person. There are certain exceptions but generally people must be provided with an explanation of a decision made about them.

The regulation also gives individuals the power to get their personal data erased in some circumstances. This includes where it is no longer necessary for the purpose it was collected, if consent is withdrawn, there's no legitimate interest, and if it was unlawfully processed.

GDPR fines

One of the biggest, and most talked about, elements of the GDPR has been the ability for regulators to fine businesses that don't comply with it. If an organisation doesn't process an individual's data in the correct way, it can be fined. If it requires and doesn't have a data protection officer, it can be fined. If there's a security breach, it can be fined.

In the UK, these monetary penalties will be decided upon by Denham's office and the GDPR states smaller offences could result in fines of up to €10 million or two per cent of a firm's global turnover (whichever is greater). Those with more serious consequences can have fines of up to €20 million or four per cent of a firm's global turnover (whichever is greater). These are larger than the £500,000 penalty the ICO could previously issue.
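As a worked example of how those two tiers behave, the sketch below applies both caps to a hypothetical firm with €1 billion in global turnover; the figures are illustrative only.

```c
/*
 * Worked example of the GDPR fine ceilings described above: each tier's
 * cap is the greater of a fixed amount and a share of global turnover.
 * The firm and its turnover are hypothetical.
 */
#include <stdio.h>

static double fine_cap(double turnover, double fixed_cap, double share) {
    double by_turnover = turnover * share;
    return by_turnover > fixed_cap ? by_turnover : fixed_cap;
}

int main(void) {
    double turnover = 1e9;  /* hypothetical firm: EUR 1bn global turnover */

    /* Lower tier: EUR 10m or 2 percent, whichever is greater -> EUR 20m. */
    printf("lower tier cap: EUR %.0fm\n", fine_cap(turnover, 10e6, 0.02) / 1e6);

    /* Upper tier: EUR 20m or 4 percent, whichever is greater -> EUR 40m. */
    printf("upper tier cap: EUR %.0fm\n", fine_cap(turnover, 20e6, 0.04) / 1e6);
    return 0;
}
```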

Denham says speculation that her office will try to make examples of companies by issuing large business-crippling fines isn't correct. "We will have the possibility of using larger fines when we are unsuccessful in getting compliance in other ways," she says. "But we've always preferred the carrot to the stick".

Denham says there is "no intention" for overhauling how her office hands out fines and regulates data protection across the UK. She adds that the ICO prefers to work with organisations to improve their practices and sometimes a "stern letter" can be enough for this to happen.

"Having larger fines is useful but I think fundamentally what I'm saying is it's scaremongering to suggest that we're going to be making early examples of organisations that breach the law or that fining a top whack is going to become the norm." She adds that her office will be more lenient on companies that have shown awareness of the GDPR and tried to implement it, when compared to those that haven't made any effort.

What does Brexit mean for GDPR?

The UK’s 2018 Data Protection Act is an almost identical copy of GDPR for a reason: when the UK leaves the EU, there won’t be a huge shift in the law. After the UK leaves, GDPR will still protect the rights of EU citizens, and businesses and organisations won’t have to change their policies.

But there could be changes for organisations that move data between the European Economic Area and the UK. This depends on what deal the UK leaves with. Because the UK will not, technically, be part of GDPR it doesn’t have any assurances that data will be protected. As such, data adequacy becomes important.

At present, the UK government has said it will seek adequacy agreements with the EU to clarify that its data protection system is essentially the same as GDPR. Once agreed, this would mean that data could easily flow between the EEA and the UK. The ICO has produced guidance around this (https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-and-brexit-ico-advice-for-organisations/).

What is personal data?


The key terms
GDPR and other data protection laws rely on the term 'personal data' to discuss information about individuals. There are two key types of personal data in the UK and they cover different categories of information.
What is personal data?
Personal data can be anything that allows a living person to be directly or indirectly identified. This may be a name, an address, or even an IP address. It includes automated personal data and can also encompass pseudonymised data if a person can be identified from it.
So, what's sensitive personal data?
GDPR describes sensitive personal data as falling into 'special categories' of information. These include trade union membership, religious beliefs, political opinions, racial information, and sexual orientation.

What should we do to comply?

The enforcement date for GDPR may have already passed but data protection is an evolving beast. It will never be completely possible for businesses to be fully "GDPR compliant".

Keeping on top of data can be a tricky thing – especially when businesses are evolving the services that are offered to customers. The ICO's guide to GDPR sets out all of the different rights and principles of GDPR.

It also has a starter guide, which is available here, that includes advice on steps such as making senior business leaders aware of the regulation, determining which info is held, updating procedures around subject access requests, and what should happen in the event of a data breach. In Ireland, the regulator has also set up a separate website explaining what should change within companies.

What if we don't comply from day one?

Businesses and organisations impacted by GDPR have had two years to get their systems ready. But things don't always go to plan. It's likely that many firms were not ready for GDPR. The UK information commissioner has stated she won't be looking to make examples of companies by issuing large fines when they're not deserved.

The ICO largely takes a collaborative approach to enforcement. Denham has said her office will look to engage with companies rather than issue them with punishments straight away. Companies who have shown awareness and taken steps to comply with GDPR are likely to be treated better than those who haven't done any work around it.

Looking for more?

We don't claim to have all the answers. In between a lot of GDPR hype, some incredibly useful resources have also been published on the regulation. Here's where to go if you're looking for more in-depth reading:

– The full regulation. It's 88 pages long and has 99 articles.

– The ICO's guide to GDPR is essential for both consumers and those working within businesses.

– EU GDPR is full of information on the regulation. It details all you need to know and had a handy countdown clock for when GDPR would come into force.

– The EU's Article 29 data protection group is publishing guidelines on data breach notifications, transparency, and subject access requests.

#####EOF##### Secret Code Found in Juniper's Firewalls Shows Risk of Government Backdoors | WIRED
Secret Code Found in Juniper's Firewalls Shows Risk of Government Backdoors

Getty Images

Encryption backdoors have been a hot topic in the last few years—and the controversial issue got even hotter after the terrorist attacks in Paris and San Bernardino, when it dominated media headlines. It even came up during this week's Republican presidential candidate debate. But despite all the attention focused on backdoors lately, no one noticed that someone had quietly installed backdoors three years ago in a core piece of networking equipment used to protect corporate and government systems around the world.

On Thursday, tech giant Juniper Networks revealed in a startling announcement that it had found "unauthorized" code embedded in an operating system running on some of its firewalls.

The code, which appears to have been in multiple versions of the company's ScreenOS software going back to at least August 2012, would have allowed attackers to take complete control of Juniper NetScreen firewalls running the affected software. It also would allow attackers, if they had ample resources and skills, to separately decrypt encrypted traffic running through the Virtual Private Network, or VPN, on the firewalls.

"During a recent internal code review, Juniper discovered unauthorized code in ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen devices and to decrypt VPN connections," Bob Worrall, the companies' CIO wrote in a post. "Once we identified these vulnerabilities, we launched an investigation into the matter, and worked to develop and issue patched releases for the latest versions of ScreenOS."


Juniper released patches for the software yesterday and advised customers to install them immediately, noting that firewalls using ScreenOS 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are vulnerable. Release notes for 6.2.0r15 show that version was released in September 2012, while release notes for 6.3.0r12 show that it was issued in August 2012.

The security community is particularly alarmed because at least one of the backdoors appears to be the work of a sophisticated nation-state attacker.

"The weakness in the VPN itself that enables passive decryption is only of benefit to a national surveillance agency like the British, the US, the Chinese, or the Israelis," says Nicholas Weaver, a researcher at the International Computer Science Institute and UC Berkeley. "You need to have wiretaps on the internet for that to be a valuable change to make [in the software]."

But the backdoors are also a concern because one of them—a hardcoded master password left behind in Juniper's software by the attackers—will now allow anyone else to take command of Juniper firewalls that administrators have not yet patched, once the attackers have figured out the password by examining Juniper's code.

Ronald Prins, founder and CTO of Fox-IT, a Dutch security firm, said the patch released by Juniper provides hints about where the master password backdoor is located in the software. By reverse-engineering the firmware on a Juniper firewall, analysts at his company found the password in just six hours.

"Once you know there is a backdoor there, ... the patch [Juniper released] gives away where to look for [the backdoor] … which you can use to log into every [Juniper] device using the Screen OS software," he told WIRED. "We are now capable of logging into all vulnerable firewalls in the same way as the actors [who installed the backdoor]."

But there is another concern raised by Juniper's announcement and patches—any other nation-state attackers, in addition to the culprits who installed the backdoors, who have intercepted and stored encrypted VPN traffic running through Juniper's firewalls in the past, may now be able to decrypt it, Prins says, by analyzing Juniper's patches and figuring out how the initial attackers were using the backdoor to decrypt it.

"If other state actors are intercepting VPN traffic from those VPN devices, … they will be able to go back in history and be able to decrypt this kind of traffic," he says.

Weaver says this depends on the exact nature of the VPN backdoor. "If it was something like the Dual EC, the backdoor doesn't actually get you in, ... you also need to know the secret. But if it's something like creating a weak key, then anybody who has captured all traffic can decrypt." Dual EC is a reference to an encryption algorithm that the NSA is believed to have backdoored in the past to make it weaker. This factor, along with knowledge of a secret key, would allow the agency to undermine the algorithm.
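For readers wondering why a Dual EC-style backdoor still requires a secret while a weak key does not, here is a simplified sketch of the generator, with notation introduced purely for illustration:

```latex
% Simplified Dual EC sketch; notation introduced here for illustration.
% s_i is the internal state, P and Q are public curve points, and x()
% takes a point's x-coordinate.
\[
  s_{i+1} = x(s_i P), \qquad r_i = \mathrm{trunc}\big(x(s_i Q)\big)
\]
% If the designer secretly knows d with P = dQ, then recovering the
% point R = s_i Q from the observed output r_i gives
\[
  x(dR) = x(s_i\,dQ) = x(s_i P) = s_{i+1},
\]
% so one output block reveals the next internal state, and every key
% derived from it, but only to whoever holds the secret d.
```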

Matt Blaze, a cryptographic researcher and professor at the University of Pennsylvania, agrees that the ability to decrypt already-collected Juniper VPN traffic depends on certain factors, but cites a different reason.

"If the VPN backdoor doesn't require you to use the other remote-access [password] backdoor first," then it would be possible to decrypt historical traffic that had been captured, he says. "But I can imagine designing a backdoor in which I have to log into the box using the remote-access backdoor in order to enable the backdoor that lets me decrypt intercepted traffic."

A page on Juniper's web site does appear to show that it's using the weak Dual EC algorithm in some products, though Matthew Green, a cryptography professor at Johns Hopkins University, says it's still unclear if this is the source of the VPN issue in Juniper's firewalls.

Juniper released two announcements about the problem on Thursday. In a second, more technical advisory, the company described two sets of unauthorized code in the software, which created two backdoors that worked independently of one another, suggesting the password backdoor and the VPN backdoor aren't connected. A Juniper spokeswoman refused to answer questions beyond what was already said in the released statements.

Regardless of the precise nature of the VPN backdoor, the issues raised by this latest incident highlight precisely why security experts and companies like Apple and Google have been arguing against installing encryption backdoors in devices and software to give the US government access to protected communication.

"This is a very good showcase for why backdoors are really something governments should not have in these types of devices because at some point it will backfire," Prins says.

Green says the hypothetical threat around NSA backdoors has always been: What if someone repurposed them against us? If Juniper did use Dual EC, an algorithm long known to be vulnerable, and if it is part of the backdoor in question, the case underscores that threat of repurposing even more.

"The use of Dual EC in ScreenOS ... should make us at least consider the possibility that this may have happened," he told WIRED.

Two Backdoors

The first backdoor Juniper found would give an attacker administrative-level or root privileges over the firewalls—essentially the highest level of access on a system—when accessing the firewalls remotely via SSH or Telnet channels. "Exploitation of this vulnerability can lead to complete compromise of the affected system," Juniper noted.

Although the firewall's log files would show a suspicious entry for someone gaining access over SSH or Telnet, the log would only provide a cryptic message that it was the "system" that had logged on successfully with a password. And Juniper noted that a skilled attacker would likely remove even this cryptic entry from log files to further eliminate any indication that the device had been compromised.
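
Juniper hasn't published the backdoor's internals, but the general shape of a hardcoded master password is easy to sketch. In this toy Python version (every name and string here is invented, not ScreenOS code), any username succeeds when the magic string is supplied, and the log records only the vague "system" entry described above:

```python
import hmac

BACKDOOR = "<<< hypothetical magic string >>>"    # hidden in the binary
USERS = {"admin": "correct-horse-battery-staple"} # legitimate credentials

def log(msg):
    print(msg)

def authenticate(username, password):
    # The backdoor: any username works with the magic password, and the
    # log shows only a cryptic entry attributing the login to "system."
    if hmac.compare_digest(password, BACKDOOR):
        log("system logged on successfully with password")
        return True
    return USERS.get(username) == password

print(authenticate("anyone", "<<< hypothetical magic string >>>"))  # True
```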

Juniper

The second backdoor would effectively allow an attacker who has already intercepted VPN traffic passing through the Juniper firewalls to decrypt the traffic without knowing the decryption keys. Juniper said that it had no evidence that this vulnerability had been exploited, but also noted that, "There is no way to detect that this vulnerability was exploited."

Juniper is the second-largest maker of networking equipment after Cisco. The Juniper firewalls in question have two functions. The first is to ensure that the right connections have access to a company or government agency's network; the other is to provide secured VPN access to remote workers or others with authorized access to the network. The ScreenOS software running on Juniper firewalls was initially designed by NetScreen, a company that Juniper acquired in 2004. But the versions affected by the backdoors were released under Juniper's watch, eight years after that acquisition.

The company said it discovered the backdoors during an internal code review, but it didn't say if this was a routine review or if it had examined the code specifically after receiving a tip that something suspicious was in it.

Speculation in the security community about who might have installed the unauthorized code centers on the NSA, though it could have been another nation-state actor with similar capabilities, such as the UK, China, Russia, or even Israel.

Prins thinks both backdoors were installed by the same actor, but also notes that the hardcoded master password giving the attackers remote access to the firewalls was too easy to find once they knew it was there. He expects the NSA would not have been so sloppy.

Weaver says it's possible there were two culprits. "It could very well be that the crypto backdoor was [done by] the NSA but the remote-access backdoor was the Chinese or the French or the Israelis or anybody," he told WIRED.

NSA documents released to media in the past show that the agency has put a lot of effort into compromising Juniper firewalls and those made by other companies.

An NSA spy tool catalogue leaked to Der Spiegel in 2013 described a sophisticated NSA implant known as FEEDTROUGH that was designed to maintain a persistent backdoor in Juniper firewalls. FEEDTROUGH, Der Spiegel wrote, "burrows into Juniper firewalls and makes it possible to smuggle other NSA programs into mainframe computers…." It's also designed to remain on systems even after they're rebooted or the operating system on them is upgraded. According to the NSA documents, FEEDTROUGH had "been deployed on many target platforms."

FEEDTROUGH, however, appears to be something different than the unauthorized code Juniper describes in its advisories. FEEDTROUGH is a firmware implant—a kind of "aftermarket" spy tool installed on specific targeted devices in the field or before they're delivered to customers. The unauthorized code Juniper found in its software was embedded in the operating system itself and would have infected every customer who purchased products containing the compromised versions of the software.

Naturally, some in the community have questioned whether these were backdoors that Juniper had voluntarily installed for a specific government and decided to disclose only after it became apparent that the backdoor had been discovered by others. But Juniper was quick to deny those allegations. "Juniper Networks takes allegations of this nature very seriously," the company said in a statement. "To be clear, we do not work with governments or anyone else to purposely introduce weaknesses or vulnerabilities into our products… Once this code was discovered we worked to produce a fix and notify customers of the issues."

Prins says the larger concern now is whether other firewall manufacturers have been compromised in a similar manner. "I hope that other vendors like Cisco and Checkpoint are also now starting a process to review their code to see if they have backdoors inserted," he said.

#####EOF##### Segway MiniPro Vulnerabilities Would Have Let Hackers Take Over the Hoverboard | WIRED
Watch Hackers Take Over a Segway With Someone On It

When you imagine riding a Segway MiniPro electric scooter, your biggest concern is probably falling on your face. Much lower on that list? The notion that attackers could remotely hack your ride, make it stop short, or even drive you into traffic. Unfortunately, as one researcher found, they could have done just that.

When Thomas Kilbride got a Segway MiniPro, its paired mobile app piqued his interest; by day, Kilbride works as an embedded device security consultant at IOActive. The app already has fairly potent capabilities as designed. You can use it to remote-control your scooter or shut it off when no one's on it, and you can even use its social GPS tracking feature to show all Segway MiniPros in an area in real time. But when Kilbride investigated the security behind those features, he found vulnerabilities that an attacker could exploit to bypass the hoverboard's safety protections from afar and take control of the device.

"I own a hoverboard, I use it quite frequently because parking is expensive," Kilbride says. "I was surprised that the exploits were as accessible as they were. Something like a transportation device should be handled with the utmost care and security, because somebody could be thrown off of it or seriously injured if an attacker decides that they want to [hack] it."

Easy Access

The Segway MiniPro app uses Bluetooth to connect to the vehicle itself. In addition to the features mentioned above, it can also change device settings and accept firmware updates to the scooter for tweaks and improvements. Think of it like a smart lighting app that talks to the bulbs.

While analyzing the communication between the app and the Segway scooter itself, Kilbride noticed that the user PIN meant to protect the Bluetooth communication from unauthorized access wasn't being used for authentication at every level of the system. As a result, Kilbride could send arbitrary commands to the scooter without needing the user-chosen PIN.
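
IOActive hasn't released exploit code, but the flaw's shape is a familiar one: authentication enforced in the client app rather than on the device itself. A toy Python model of that mistake (invented names and commands, not the MiniPro's real protocol):

```python
class ToyScooter:
    def __init__(self, pin):
        self.pin = pin
        self.speed = 10

    def app_send(self, pin, command):
        # App-layer "security": the PIN is checked only here, in the app.
        if pin != self.pin:
            raise PermissionError("bad PIN")
        self.handle(command)

    def handle(self, command):
        # Device-side handler: no authentication at all. An attacker
        # speaking Bluetooth directly reaches this path unimpeded.
        if command == "STOP":
            self.speed = 0

scooter = ToyScooter(pin="1234")
scooter.handle("STOP")  # bypasses the PIN check entirely
print(scooter.speed)    # 0
```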

He also discovered that the hoverboard's software update platform didn't have a mechanism in place to confirm that firmware updates sent to the device were really from Segway (often called an "integrity check"). This meant that in addition to sending the scooter commands, an attacker could easily trick the device into installing a malicious firmware update that could override its fundamental programming. In this way an attacker would be able to nullify built-in safety mechanisms that prevented the app from remote-controlling or shutting off the vehicle while someone was on it.
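
The standard countermeasure, and roughly what the cryptographic signing Segway later added accomplishes (described below), is for the device to verify each image against the vendor's public key before flashing it. A sketch of that assumed design using the Python cryptography library; Segway's actual implementation hasn't been published:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: the private key never leaves the vendor's build system.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...new scooter firmware image..."
signature = vendor_key.sign(firmware)

# Device side: only the public key is baked into the scooter.
trusted_key = vendor_key.public_key()

def install(image, sig):
    try:
        trusted_key.verify(sig, image)  # raises if image was tampered with
    except InvalidSignature:
        return False  # reject unsigned or modified firmware
    # ...flash the verified image...
    return True

print(install(firmware, signature))            # True
print(install(firmware + b"evil", signature))  # False
```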

"The app allows you to do things like change LED colors, it allows you to remote-control the hoverboard and also apply firmware updates, which is the interesting part," Kilbride says. "Under the right circumstances, if somebody applies a malicious firmware update, any attacker who knows the right assembly language could then leverage this to basically do as they wish with the hoverboard."

As if that weren't enough, the Segway MiniPro app also provided one other tool to unintentionally aid in malicious activity. The GPS feature known as "Rider Nearby" acted as a sort of social platform for finding other MiniPro owners, but it's easy to see how publicly available, persistent location tracking could be abused. As part of addressing Kilbride's findings, Segway discontinued the feature.

Regaining Balance

The good news is that IOActive disclosed the bugs to Segway, which is owned by Chinese scooter-maker Ninebot, in January, and the company addressed the bulk of the problems in an app update in April. As part of the changes, Segway added mechanisms like cryptographic signing to validate firmware updates, which should prevent full takeovers. It eliminated the Rider Nearby feature, and took steps to evaluate its Bluetooth communication protocols and security. Segway has not yet responded to WIRED's request for comment. Kilbride says the company was responsive to his disclosures, but notes that some weaknesses may still exist in the way users can access the device's Bluetooth management interface. The severe attacks Kilbride executed during his research aren't possible anymore, though.

Although the bugs are patched, such extensive exposure in a digitally connected vehicle reinforces the very real dangers of device hacking. IoT vulnerabilities have already led to real-world harm in many incidents, and "smart" transportation has long posed clear physical safety risks if left unsecured. For Segway, pairing an internet-connected device with a Bluetooth-enabled vehicle created exposures that a standalone scooter without digital connectivity would have avoided.

In terms of existential dread, you can find some reprieve in knowing that most hackers are seeking profit, and there isn’t a lot of money to be made in maiming Segway riders. But stealing Segways, which someone could have done with Kilbride’s exploits, could be a genuinely appealing scheme.

#####EOF##### The Flying Hospital That Rushes Wounded Soldiers to Safety | WIRED
The Flying Hospital That Rushes Wounded Soldiers to Safety

Ryan Young

The key to getting a wounded soldier from a battlefield to a hospital is stabilization—holding off the damage done by bullet or bomb for long enough to get to surgery. So faster evacuation is always better. Now the hospital can actually meet the injured partway—in the form of a Boeing C-17 Globemaster III, transformed into a flying triage unit. On board, doctors stabilize, monitor, and treat soldiers with high-level care so they make it safely home.

The Ward

Stable patients go to the ward (pictured), which can accommodate dozens of patients in stacked, bunk-bed-like pallets.

Flight Path

If an unstable patient needs to avoid turbulence, the medical teams use noise-canceling headsets to discuss route adjustments with the aircrew. They can also request an altitude change to alter cabin pressure if, say, air trapped inside a patient’s body might expand and damage tissue.

Emergency Room

Patients enter the C-17 through the back of the cargo hold, where medics stabilize them using resuscitation, intubation, and tourniquets. Then they assign them to the other medical teams (there are three!) for in-flight follow-up.

Surgery

If a patient begins to decline rapidly during flight, doctors can insert chest drainage tubes and make emergency airway incisions. The operating room is equipped for abdominal surgery and open-heart massages too, though nobody has needed them yet.

High Temperatures

Burn victims need to stay warm to avoid hypothermia. Eighty-four strip heaters warm the floor panels from below, helping the flight crew crank the cabin temperature as high as 90 degrees Fahrenheit.

Medical Oxygen

Soldiers whose lungs can’t oxygenate their blood have a flight-optimized extracorporeal membrane oxygenation machine to do it for them. It pulls oxygen from tanks in the plane’s nose and pumps it into the blood.

Increased Range

The sooner patients can get to the ground, the better: Planes with limited range used to hopscotch between airfields, but the C-17’s in-flight refueling makes for faster nonstop trips.

Critical Care

The ICU is equipped with pacemakers, IV fluids, and drugs for treating septic shock.

COURTESY OF THE U.S. AIR FORCE | Ryan Young

#####EOF##### The Best Alternative For Every Facebook Feature | WIRED
Deleting Facebook? Here Are the Best Alternatives For What You'll Miss

If you're ready to quit Facebook, here's how to replace everything you might miss.
Mai Schotz

By now, you're probably aware of the hurricane tearing its way through Facebook. Over the weekend, both The Guardian and The New York Times published explosive reports about the improper use of data belonging to 50 million Facebook users by Trump campaign-affiliated data firm Cambridge Analytica.

The incident is the most high-profile misuse of Facebook's systems to become public, but it's far from the only one. Russian propagandists slipped through Facebook's advertising safeguards to try to influence the 2016 presidential election. In 2014, the social network allowed academics to use the News Feed to tinker with users' emotions. The United Nations even said earlier this month that Facebook played a role in exacerbating the genocide of the Rohingya people in Myanmar. Facebook itself has admitted that mindlessly scrolling on its platform isn't good for you.

If all that has you thinking about deleting Facebook entirely, you're far from alone. (Quitting the social network is also somewhat of a first-world privilege, since for many people Facebook functions as the entire internet itself.) But going cold turkey can be hard; Facebook actually provides useful services sometimes, and there's no one-for-one replacement.

Fortunately, you can pretty easily cobble together anything you might miss from Facebook with a combination of apps and services. It won't be the exact same, but at least you'll be less tempted to go back.

News Feed

Lots of services can feed you the latest news. Facebook, though, displays the specific stories your friends and family are talking about. If you value that feature, Nuzzel is a great choice. You can sync the app to other social networks you might use, like Twitter and LinkedIn, and it will feed you the articles your friends, as well as friends of friends, are talking about. The app also has a "Best of Nuzzel" feature where you can see the stories being widely discussed across the whole platform.

For more general news that can delight and surprise, try Digg, an aggregation site that prioritizes deeply reported features on a range of topics as well as lots of fun and quirky news stories. And of course, iPhone and iPad owners can always just fire up Apple News if they don't want to bother setting up a whole new system. None of those fit the bill? Here's a deeper look at Facebook News Feed alternatives.

Messenger

One of Facebook's most useful features isn't the main app itself, but its spinoff app Messenger. But while Messenger makes it easy to chat with Facebook friends, it's also confusing and riddled with unnecessary clutter. If you're looking for a clean and easy-to-use messaging app, try Signal. It's a free, end-to-end encrypted messaging service, approved by security researchers, that sticks to the basics. There are no animated stickers or fancy chat bots, but Signal does an excellent job of keeping you securely connected to your friends and family.

If you're looking for a clean and easy-to-use messaging app, try Signal.

It also has a desktop version, allowing you to sync messages between your computer and phone, just like on Messenger. Signal can import your contacts, so it's easy to start a thread with anyone you already have saved in your phone. Signal also has several additional security features that might come in handy if you're aiming to avoid surveillance, like the ability to set messages to delete after a certain amount of time. You can also use Signal to make voice and video calls, just like on Messenger. There are absolutely no advertisements, and the app does not collect your personal information.

Yes, WhatsApp also offers encrypted messaging, using the same underlying protocol as Signal. But Facebook owns WhatsApp—and can extract some metadata from its users—which defeats the purpose of trying to rid your life of the social network. Besides, even WhatsApp cofounder Brian Acton says it's time to delete Facebook.

Events

One of the primary reasons to stay on Facebook is not to miss an invite to a party or other event. It's worth unpacking that notion in the first place: If your friend or family member doesn't realize you're not on Facebook, do they really value your presence at the event they're planning? If someone genuinely wants you somewhere, they'll find a way to invite you, Facebook or no.

From the planning side, collecting people's contact info can be a pain, sure. But that's a one-time bother. From there, use Paperless Post for beautiful and functional email invites and RSVP tracking. And for more rote calendar coordination, use Doodle to find the day for a dinner or meeting that works for everyone. The site lets each guest respond with the times that work for them, so you can easily figure out how best to accommodate everyone's schedule.

Birthday Reminders

Another worry with deleting Facebook is that without it, you won't be able to remember anyone's birthday. Luckily, there's a way to export your friends' birthdays directly from Facebook before you delete your account. First, log into the social network, then click Events on the left-hand side. Toward the bottom, there's an option to add events to your calendar of choice, like Microsoft Outlook, Google Calendar, or Apple Calendar. There, tap "Learn More." You'll be led to a full set of instructions for how to export all your friends' birthdays.

If you're friends with hundreds or thousands of people on Facebook, it understandably might not be worthwhile to put them all in your Gcal. In that case, it might be easiest just to take 20 minutes or so to add your close friends' and family members' birthdays to your calendar. And really, did the annual onslaught of best wishes on Facebook add much to your life in the first place?

Marketplace

In 2016, Facebook introduced Marketplace, a feature allowing users to buy and sell items from people in their communities. As a replacement, consider Nextdoor, an app designed to keep you in the loop about what's happening in your neighborhood. It has a free-and-for-sale section that, like Marketplace, emphasizes local offerings and feels less sketchy than Craigslist.

Groups

Groups are the hardest feature of Facebook to replace, since they serve a wide range of purposes for different people. If you're looking to organize friends and family in one place, GroupMe is a great choice. The app helps create an organized group chat, where you can share photos and messages. If you're looking for a larger circle of people interested in the same topic, there's almost certainly a sub-group on Reddit to fill your needs. The forum site has active communities organized around everything from skincare to anime.

Third-Party Logins

For many people, Facebook accounts have become de facto identities across the internet, thanks to the social network's integration with third-party apps like Tinder and Spotify. When you sign up for a service using Facebook instead of filling out a form with your personal information, deleting that Facebook account creates additional headaches.

The best replacement is a password manager, which can store your credentials for every site you use in one place. It can also generate a new, secure password every time you sign up for a new website or service. Here's an in-depth guide to choosing the best password manager for you and why you should be using one. Our two favorite picks are 1Password and LastPass.

While you'll still need to provide information like your name and email address—you usually don't need to manually input this info if you sign up with Facebook—using a password manager will prevent third-party apps from collecting the personal information you've provided to the social network.

One word of warning: Many dating apps require Facebook integration to work, meaning you won't be able to use them if you delete your account. You can still create a Tinder account without Facebook, but you will lose all your current matches and conversations. Hinge and Bumble require you to have a Facebook account to sign up, though the latter company says it's working on dropping that requirement.

One Last Consideration

While deleting Facebook might feel like a step in a more private direction, it's ultimately not going to do much to change the digital economy that profits by collecting your personal information and selling it to data brokers. Facebook collects arguably the most private information, but plenty of other popular social networking apps like Snapchat and Twitter collect your data too. That's their entire business model: When you're not paying for a product, you are the product. Even your internet service provider is likely collecting your personal information. In fact, through its expansive ad network, Facebook collects info from people who aren't even on the platform.

Still, deleting your Facebook account will prevent some of your personal info from being sucked up, and might make you feel better too. And with a few choice downloads, you won't miss a thing.

More (Or Less) Facebook

#####EOF##### Security | WIRED

Decoding Robert Mueller's Russia Investigation | WIRED25

WIRED contributing editor Garrett M. Graff, who covers special counsel Robert Mueller's Russia probe, authored the magazine's June cover story about Mueller's time in Vietnam, and wrote "The Threat Matrix: Inside Robert Mueller's FBI and the War on Global Terror." Graff breaks down the investigation's status, the big outstanding questions, and where the investigation is likely to go after the midterm election.

#####EOF##### GitHub Open Sources a Tool That Teaches Students to Code | WIRED
GitHub Open Sources a Tool That Teaches Students to Code

GitHub

GitHub is a way for software engineers to share, shape, and collaborate on code. And it's also a good way of teaching people to do the same thing.

John Britton is GitHub's "education liaison." That means he helps bring GitHub to schools and college campuses. In recent years, the sweeping online service has remade the way coders build software across Silicon Valley and beyond, and now, according to Britton, it's changing the way that teachers teach coding. After all, GitHub is all about working on code together.

Hundreds of thousands of students are enrolled in GitHub's various education programs, Britton says, and more than 3,000 teachers are using GitHub as a teaching tool. "It's becoming more and more popular," he tells WIRED. "We're definitely headed towards using more real tools in the classroom."

Mark Tareshawty will tell you something similar. Now a senior in the computer science department at Ohio State University and a teaching assistant in the university's web apps course, he has seen firsthand the rise of GitHub in education. GitHub gives teachers a way of readily sharing code and coding assignments with students as they learn the craft of building software. Teachers can also use it to teach collaborative coding, an important skill in the modern world of pair programming. Nowadays, that's how software is built.

"When I started in computer science, there wasn't a whole lot of collaboration, there wasn't a whole lot of teamwork. You worked by yourself. You didn't talk to anybody," Tareshawty says, before pointing out that he started just three or four years ago. "But I'm now using GitHub as a teaching assistant, and it has really changed the way that people think....it feels more like what we would do when working out in the [professional world]."

The problem, he says, is that sharing assignments in this way isn't as easy as it could be. That's why he built Classroom for GitHub, a tool meant to significantly streamline the process. Basically, it lets teachers invite students onto GitHub and create and share coding assignments through the service. Teachers can send a single URL to students, Tareshawty says. Once they click on it, they're automatically set up to view, modify, and collaborate on code.
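
Under the hood, that sort of workflow leans on GitHub's REST API; the repetitive part Classroom automates is creating one starter repository per student. A rough sketch of the idea (the org name, token, and naming scheme here are hypothetical, and this is not Classroom's actual code):

```python
import requests

TOKEN = "..."          # a teacher's GitHub access token (placeholder)
ORG = "intro-cs-fall"  # hypothetical course organization

def create_assignment_repo(student):
    # One private starter repo per student, e.g. hw1-ada, hw1-grace.
    resp = requests.post(
        f"https://api.github.com/orgs/{ORG}/repos",
        headers={"Authorization": f"token {TOKEN}"},
        json={"name": f"hw1-{student}", "private": True},
    )
    resp.raise_for_status()

for student in ["ada", "grace", "alan"]:
    create_assignment_repo(student)
```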

The tool dovetails with GitHub Education, a service that provides classrooms with free private code repositories where teachers and students can post code and collaborate. Naturally, Tareshawty's tool is open source, like so much on GitHub, meaning it's freely available to the world at large. GitHub plans to release it later today; Tareshawty built it as part of the GitHub Summer of Code program, which provides stipends for student open source projects.

Classroom for GitHub is part of a larger effort to improve computer science education through internet services. From Codecademy to Khan Academy and more, online courses for learning how to code are available not just to high school and university students but, well, to anyone. Want to learn how to tell a computer what to do? Just turn on your computer.

#####EOF##### I Bought Used Voting Machines on eBay for $100 Apiece. What I Found Was Alarming | WIRED
I Bought Used Voting Machines on eBay for $100 Apiece. What I Found Was Alarming

Mike Brown/The Commercial Appeal/AP

In 2016, I bought two voting machines online for less than $100 apiece. I didn't even have to search the dark web. I found them on eBay.

Surely, I thought, these machines would be subject to strict lifecycle controls, like other sensitive equipment such as medical devices. I was wrong. I was able to purchase a pair of direct-recording electronic voting machines and have them delivered to my home in just a few days. I did this again just a few months ago. Alarmingly, they are still available to buy online.

WIRED OPINION

ABOUT

Brian Varner is a Symantec special projects researcher on the Cyber Security Services team, leading the company's CyberWar Games and emerging technologies development. He previously worked at the National Security Agency as a tactical analyst.

If getting voting machines delivered to my door was shockingly easy, getting inside them proved to be simpler still. The tamper-proof screws didn’t work, all the computing equipment was still intact, and the hard drives had not been wiped. The information I found on the drives, including candidates, precincts, and the number of votes cast on the machine, was not encrypted. Worse, the “Property Of” government labels were still attached, meaning someone had sold government property filled with voter information and location data online, at a low cost, with no consequences. It would be the equivalent of buying a surplus police car with the logos still on it.

My aim in purchasing voting machines was not to undermine our democracy. I'm a security researcher at Symantec who started buying the machines as part of an ongoing effort to identify their vulnerabilities and strengthen election security. In 2016, I was conducting preliminary research for our annual CyberWar Games, a company-wide competition where I design simulations for our employees to hack into. Since it was an election year, I decided to create a scenario incorporating the components of a modern election system. Had I been a malicious actor seeking to disrupt an election instead, selling me these machines would have been akin to a bank selling its old vault to an aspiring burglar.

I reverse-engineered the machines to understand how they could be manipulated. After removing the internal hard drive, I was able to access the file structure and operating system. Since the machines were not wiped after they were used in the 2012 presidential election, I got a great deal of insight into how the machines store the votes that were cast on them. Within hours, I was able to change the candidates' names to whatever I wanted. When the machine printed out the official record of the votes cast, it showed that the candidate I had invented received the most votes on that particular machine.

This year, I bought two more machines to see if security had improved. To my dismay, I discovered that the newer model machines—those that were used in the 2016 election—are running Windows CE and have USB ports, along with other components, that make them even easier to exploit than the older ones. Our voting machines, billed as “next generation,” and still in use today, are worse than they were before—dispersed, disorganized, and susceptible to manipulation.

To be fair, there has been some progress since the last presidential election, including the development of internal policies for inspecting the machines for evidence of tampering. But while state and local election systems have been conducting risk assessments, we’ve also seen an 11-year-old successfully hack a simulated voting website at DefCon, for fun.

A recent in-depth report on voting machine vulnerabilities concluded that a perpetrator would need physical access to a voting machine to exploit it. I concur with that assessment. When I reverse-engineered voting machines in 2016, I noticed that they used a smart card to authenticate a user and allow them to vote. There are many documented vulnerabilities in the types of smart cards in use, from satellite receiver cards to bank chip cards. By using a $15 palm-sized device, my team was able to exploit a smart chip card, allowing us to vote multiple times.

In most parts of the public and private sectors, it would be unthinkable for such a sensitive process to be so insecure. Try to imagine a major bank leaving ATMs with known vulnerabilities in service nationwide, or a healthcare provider identifying a problem in how it stores patient data, then leaving it unpatched after public outcry. It just doesn’t fit with our understanding of cybersecurity in 2018.

Those industries are governed by regulations that outline how sensitive information and equipment must be handled. The same common-sense regulations don’t exist for election systems. PCI and HIPAA are great successes that have gone a long way in protecting personally identifiable information and patient health conditions. Somehow, there is no corollary for the security of voters, their information and, most importantly, the votes they cast.

Since these machines are for sale online, individuals, precincts, or adversaries could buy them, modify them, and put them back online for sale. Envision a scenario in which foreign actors purchased these voting machines. By reverse engineering the machine like I did to exploit its weaknesses, they could compromise a small number of ballot boxes in a particular precinct. That's the greatest fear of election security researchers: not wholesale flipping of millions of votes, which would be easy to detect, but a small, public breach of security that would sow massive distrust throughout the entire election ecosystem. If anyone can prove that the electoral process can be subverted, even in a small way, repairing the public's trust will be far costlier than implementing security measures.

I recognize that states are fiercely protective of their rights. But there’s an opportunity here to develop nationwide policies and security protocols that would govern how voting machines are secured. This could be accomplished with input from multiple sectors, in a process similar to the development of the NIST framework—now widely recognized as one of the most comprehensive cybersecurity frameworks in use.

Many of the rules we believe should be put into place are uncomplicated and inexpensive. For starters, we can institute lifecycle management of the components that make up the election system. By simply regulating and monitoring the sale of used voting machines more closely, we would create a huge barrier to bad actors.

The fact that information is stored unencrypted on hard drives simply makes no sense in the current threat environment. That voter data can be left on devices, unencrypted, and then sold on the open market is malpractice.
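
The missing control is straightforward to sketch: encrypt records at rest so a resold drive yields ciphertext rather than voter data. A minimal illustration with the Python cryptography library; in a real system the key would live in an HSM or TPM, never beside the data, and the record format shown is invented:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: stored in an HSM/TPM,
f = Fernet(key)              # never on the same disk as the records

record = b"precinct=12;candidate=JANE DOE;votes=1"  # invented record
blob = f.encrypt(record)     # what should actually sit on the drive

print(blob[:16])             # opaque ciphertext to a buyer on eBay
print(f.decrypt(blob))       # readable only with the key
```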

Finally, we must educate our poll workers and voters to be aware of suspicious behavior. One vulnerability we uncovered involves the chip card used in electronic voting machines. The card can be purchased for $15 and programmed with simple code that allows the user to vote multiple times. This is something that we believe could be avoided with well-trained, alert poll workers.

Time and effort are our main obstacles to better policies. When it comes to securing our elections, that’s a low bar. We must do better; the alternative is too scary to consider in our current environment. Through increased training, public policy, and a little common sense, we can greatly enhance the security and integrity of our electoral process.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.

More Election Coverage from WIRED

#####EOF##### AI Can Help Cybersecurity—If It Can Fight Through the Hype | WIRED
AI Can Help Cybersecurity—If It Can Fight Through the Hype

There are a ton of claims around AI and cybersecurity that don't quite add up. Here's what's really going on.
Alyssa Foote

Walking the enormous exhibition halls at the recent RSA security conference in San Francisco, you could have easily gotten the impression that digital defense was a solved problem. Amidst branded t-shirts and water bottles, each booth hawked software and hardware that promised impenetrable defenses and peace of mind. The breakthrough powering these new panaceas? Artificial intelligence that, the sales pitch invariably goes, can instantly spot any malware on a network, guide incident response, and detect intrusions before they start.

That rosy view of what AI can deliver isn't entirely wrong. But what next-generation techniques actually do is more muddled and incremental than marketers would want to admit. Fortunately, researchers developing new defenses at companies and in academia largely agree on both the potential benefits and challenges. And it starts with getting some terminology straight.

"I actually don't think a lot of these companies are using artificial intelligence. It's really training machine learning," says Marcin Kleczynski, CEO of the cybersecurity defense firm Malwarebytes, which promoted its own machine learning threat detection software at RSA. "It's misleading in some ways to call it AI, and it confuses the hell out of customers."

Rise of the Machines

The machine learning algorithms security companies deploy generally train on large data sets to "learn" what to watch out for on networks and how to react to different situations. Unlike an artificially intelligent system, most of the security applications out there can't extrapolate new conclusions without new training data.

Machine learning is powerful in its own right, though, and the approach is a natural fit for antivirus defense and malware scanning. For decades AV has been signature-based, meaning that security companies identify specific malicious programs, extract a sort of unique fingerprint for each of them, and then monitor customer devices to ensure that none of those signatures appear.
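
In simplified form, signature scanning is just fingerprint lookup. The sketch below uses only the Python standard library; the digest shown is the well-known EICAR antivirus test file's SHA-256, standing in for a real malware fingerprint:

```python
import hashlib
from pathlib import Path

KNOWN_BAD = {
    # SHA-256 of the EICAR antivirus test file, as a stand-in signature.
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_known_malware(path):
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in KNOWN_BAD  # a single flipped byte evades this check
```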

'It's misleading in some ways to call it AI, and it confuses the hell out of customers.'

Marcin Kleczynski, Malwarebytes

Machine learning-based malware scanning works in a somewhat similar manner—the algorithms train on vast catalogues of malicious programs to learn what to look for. But the ML approach has the added benefit of flexibility, because the scanning tool has learned to look for characteristics of malware rather than specific signatures. Where attackers could stymie traditional AV by making just slight alterations to their malicious tools that would throw off the signature, machine learning-based scanners, offered by pretty much all the big names in security at this point, are more versatile. They still need regular updates with new training data, but their more holistic view makes a hacker's job harder.
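
The feature-based approach is easy to caricature in a few lines: train a classifier on measurable traits of binaries rather than exact fingerprints. The features and numbers below are invented toy data; production scanners train on millions of samples and hundreds of features:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy features per file: [size_kb, entropy, import_count, is_packed]
X = [
    [120, 7.9, 3, 1],    # packed, high entropy, few imports: malware
    [340, 7.8, 5, 1],    # malware
    [800, 5.1, 120, 0],  # benign
    [650, 4.8, 95, 0],   # benign
]
y = [1, 1, 0, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A slightly mutated sample still looks malicious to the model,
# where an exact signature match would have missed it:
print(clf.predict([[128, 7.7, 4, 1]]))  # [1]
```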

"The nature of malware constantly evolves, so the people who write signatures for specific families of malware have a huge challenge," says Phil Roth, a data scientist at the machine learning security firm Endgame, that has its own ML-driven malware scanner for Windows systems. With an ML-based approach, "the model you train definitely needs to reflect the newest things that are out there, but we can go on a little bit of a slower pace. Attackers often build on old frameworks or use code that already exists, because if you write malware from scratch it's a lot of effort for an attack that might not have a large payoff. So you can learn from all the techniques that exist in your training set, and then recognize patterns when attackers come out with something that’s only slightly new."

Similarly, machine learning has become indispensable in the fight against spam and phishing. Elie Bursztein, who leads the anti-abuse research team at Google, notes that Gmail has used machine learning techniques to filter emails since its launch in 2004. But as attack strategies have evolved and phishing schemes have become more pernicious, Gmail and other Google services have needed to adapt to hackers who specifically know how to game them. Whether attackers are setting up fake (but convincing-looking) Google Docs links or tainting a spam filter's idea of which messages are malicious, Google and other large service providers have increasingly needed to lean on automation and machine learning to keep up.

As a result, Google has found applications for machine learning in almost all of its services, especially through an ML technique known as deep learning, which allows algorithms to do more independent adjustments and self-regulation as they train and evolve. "Before we were in a world where the more data you had the more problems you had," Bursztein says. "Now with deep learning, the more data the better. We are preventing violent images, scanning comments, detecting phishing and malware in the Play Store. We use it to detect fraudulent payments, we use it for protecting our cloud, and detecting compromised computers. It’s everywhere."

At its core, machine learning's biggest strength in security is training to understand what is "baseline" or "normal" for a system, and then flagging anything unusual for human review. This concept applies to all sorts of ML-assisted threat detection, but researchers say that the machine learning-human interplay is the crucial strength of the techniques. In 2016, IBM estimated that an average organization deals with over 200,000 security events per day.
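
That baseline-then-flag pattern can be sketched with an off-the-shelf anomaly detector: fit on a window of routine activity, then surface outliers for an analyst. The numbers here are synthetic stand-ins for real per-event features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy features per event: [megabytes_out, logins_per_hour]
normal = rng.normal(loc=[5.0, 2.0], scale=[1.0, 0.5], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

events = np.array([[5.2, 2.1],    # ordinary traffic
                   [48.0, 9.0]])  # exfiltration-sized burst
print(detector.predict(events))   # [ 1 -1] -- the burst gets flagged
```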

Machine learning's most common role, then, is additive. It acts as a sentry, rather than a cure-all.

"It's like there’s a machine learning assistant that has seen this before sitting next to the analyst," says Koos Lodewijkx, vice president and chief technology officer of security operations and response at IBM Security. The team at IBM has increasingly leaned on its Watson computing platform for these "knowledge consolidation" tasks and other areas of threat detection. "A lot of work that’s happening in a security operation center today is routine or repetitive, so what if we can automate some of that using machine learning or just make it easier for the analyst?" Lodewijkx says.

The Best Offense

Though many machine learning tools have already shown promising results in defense, researchers almost unanimously warn about the ways attackers have begun to adopt machine learning techniques themselves. Examples already exist in the wild, like hacking tools that use machine vision to defeat captchas, and more of these types of attacks are on the horizon.

Another present threat to machine learning is data poisoning. If attackers can figure out how an algorithm is set up, or where it draws its training data from, they can figure out ways to introduce misleading data that builds a counter-narrative about what content or traffic is legitimate versus malicious. For example, attackers may run campaigns on thousands of accounts to mark malicious messages or comments as "Not Spam" in an attempt to skew an algorithm's perspective.
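
A label-flipping attack of that kind fits in a few lines. In this deliberately tiny sketch, two attacker-controlled "Not Spam" reports are enough to drag a naive retraining loop into accepting the scam message:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win money now", "free prize claim", "lunch at noon?",
         "meeting moved to 3", "win money now", "free prize claim"]
labels = [1, 1, 0, 0, 0, 0]  # last two are spam, but attacker-run
                             # accounts reported them as "Not Spam"

vec = CountVectorizer().fit(texts)
clf = MultinomialNB().fit(vec.transform(texts), labels)

print(clf.predict(vec.transform(["win money now"])))  # [0]: spam slips by
```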

'People should just be aware that this technology has limitations.'

Battista Biggio, University of Cagliari

In another example, researchers from the cloud security firm Cyxtera built a machine learning-based phishing attack generator that trained on more than 100 million particularly effective historic attacks to optimize and automatically generate effective scam links and emails. "An average phishing attacker will bypass an AI-based detection system 0.3 percent of the time, but by using AI this 'attacker' was able to bypass the system more than 15 percent of the time," says Alejandro Correa Bahnsen, Cyxtera's vice president of research. "And we wanted to be as close as possible to how an actual attacker would build this. All the data was data that would be available to an attacker. All the libraries were open source."

Researchers note that this is why it is important that ML systems are set up to encourage "human in the loop," so systems aren't sole, autonomous arbiters. ML systems "should have the option to say 'I have not seen this before' and ask help from a human," says Battista Biggio, an assistant professor at the University of Cagliari, Italy, who studies machine learning security. "There’s no real intelligence in there—it’s inferences from data, correlations from data. So people should just be aware that this technology has limitations."

To this end, the research community has worked to understand how to reduce the blind spots in ML systems so they can be hardened against attacks on those weaknesses. At RSA, researchers from Endgame released an open source threat data training set called EMBER, with the hope that they can set an example, even among competing companies, to focus on collaboration in security ML. "There are good reasons that the security industry doesn’t have as many open data sets," Endgame's Roth says. "These kinds of data might have personally identifying information or might give attackers information about what a company’s network architecture looks like. It took a lot of work to sanitize the EMBER dataset, but my hope is to spur more research and get defenders to work together."

That collaboration may be necessary to stay ahead of attackers using machine learning techniques themselves. There's real promise behind machine learning in cybersecurity, despite the overwhelming hype. The challenge is keeping expectations in check.

Machine vs Machine

#####EOF##### WIRED

Why Averaging 95% From the Free-Throw Line is Almost Impossible

The very best basketball free throw shooters can sink the ball about 90 percent of the time. What would it take to get to 95 percent? WIRED's Robbie Gonzalez steps up to the foul line with top shooter Steve Nash to find out.

#####EOF##### Sci-Fi Writers Are Imagining a Path Back to Normality | WIRED

Sci-Fi Writers Are Imagining a Path Back to Normality

#####EOF##### Flame Windows Update Attack Could Have Been Repeated in 3 Days, Says Microsoft | WIRED
Flame Windows Update Attack Could Have Been Repeated in 3 Days, Says Microsoft

When the sophisticated state-sponsored espionage tool known as Flame was exposed last year, there was probably no one more concerned about the discovery than Microsoft, which realized that the tool was signed with an unauthorized Microsoft certificate to verify its trustworthiness to victim machines. The attackers also hijacked a part of Windows Update to deliver it to targeted machines.

After examining the nature of the certificate attack and everything the malicious actors needed to know to pull it off, Microsoft engineers estimated that they had about twelve days to fix the weaknesses it exploited before other, less sophisticated actors would be able to repeat the attack on Windows machines.

But then Microsoft conducted some tests to recreate the steps that copycat attackers would have to follow, and discovered that it would in fact take just three days to repeat the Windows Update and certificate portion of the attack in order to deliver other signed malware to victim machines.

"So that's when we switched to Plan B," says Mike Reavey, senior director of the Microsoft Security Response Center, speaking at the RSA Security Conference on Thursday.

Reavey relayed the actions his team took after Kaspersky Lab discovered Flame last year, and highlighted how little time response teams have these days to fix dangerous threats before copycat attackers can learn and repeat them.

Flame was a massive and highly sophisticated spy kit that was found infecting systems in Iran and elsewhere and was believed to be part of a well-coordinated ongoing, state-run cyberespionage operation.

It was created by the same group that made Stuxnet, believed to be Israel and the U.S., and targeted systems in Iran, Lebanon, Syria, Sudan, the Israeli Occupied Territories and other countries in the Middle East and North Africa for at least two years before being discovered.

One of the most disturbing aspects of Flame, however, was its devious subversion of the Windows Update client on targeted machines to spread the malware within a company or an organization's network.

After Kaspersky released samples of the malware on May 28, 2012, Microsoft discovered that Flame used a man-in-the-middle attack that subverted the Windows Update client to spread.

The Windows Update attack didn't involve a breach of Microsoft's network and never affected the Windows Update service that delivers security patches and other updates to customer machines. Instead it focused on compromising the process for updating the Windows Update client itself that sits on a customer machine.

The Windows Update client regularly checks for a new version of itself, downloading a series of files from Microsoft servers that are signed with a Microsoft certificate. But in this case, when the Windows Update client on a machine sent out its beacon, a compromised machine on the victim's network that the attackers already controlled intercepted the request in a man-in-the-middle attack and redirected the client to download a malicious file masquerading as a Windows Update client file. The file was signed with a rogue Microsoft certificate that the attackers obtained after conducting an MD5 collision on the hash.

To generate their fake certificate, the attackers exploited a vulnerability in the cryptography algorithm that Microsoft used for enterprise customers to set up Remote Desktop service on machines. The Terminal Server Licensing Service provides certificates with the ability to sign code, which is what allowed the Flame file to be signed as if it came from Microsoft.

The attackers needed to conduct the collision attack in order to have a certificate that would get Flame onto systems that were using the Windows Vista operating system or later. To recreate these specific steps would take copycat attackers a lot of time and resources.

But Microsoft realized that other attackers wouldn't need to do all of this work; they could simply use a less-modified version of a rogue certificate that would still be acceptable to Windows XP machines. Microsoft found it would take only three days for hackers to figure out how the certificates were structured in order to obtain one and how to then subvert the Windows Update client using a man-in-the-middle attack to get a malicious file signed with it onto systems.

On June 3, Microsoft announced that it had discovered the Windows Update attack in Flame and rolled out a series of fixes that included revoking three unauthorized certificates. The company also hardened the certificate channel.

"We didn't just revoke the malicious certificates used by Flame," Reavey said. "We revoked the [certificate authority]. So any certificate that might have been ever issued were no longer trusted by any version of Windows.... The main thing we did there was we pin the code-signing check to a specific and unique CA that's only used by the Windows Update client."

Microsoft also created an update for the Windows Update client to prevent a man-in-the-middle attack from occurring and added a system for easily revoking unauthorized certificates in the future through a trusted list.

"We didn't want to have to ship a patch to Windows machines to have Windows not trust certificates anymore," he said. "We took a feature that was included in Windows 8 and we back-ported it all the way down to Windows Vista. Where now every 24 hours a trust list will be checked on the system and if there is anything we put in the untrusted store, it will be updated relatively immediately."

#####EOF##### LEXUS DESIGN AWARDS | WIRED
Skip Article Header. Skip to: Start of Article.

LEXUS DESIGN AWARDS

#####EOF##### If China Hacked Marriott, 2014 Marked a Full-on Assault | WIRED
If China Hacked Marriott, 2014 Marked a Full-on Assault

China's role in the Marriott hack remains unconfirmed, but the accusation comes amid already heightened tensions between the United States and China over trade and intellectual property theft.
Ralf Hirschberger/Picture Alliance/Getty Images

The massive data breach that affected 500 million Marriott customers feels like a recent event, given that the company just discovered and disclosed it over the past four months. But it's important to remember that the attack began much earlier, especially as Reuters and others have reported that state-sponsored Chinese hackers were behind it. If that attribution holds up, China's broader hacking campaign against the US in 2014 will go down as a historic assault.

China's role in the Marriott hack remains unconfirmed, but the accusation comes amid already heightened tensions between the United States and China over trade and intellectual property theft. The Department of Justice is expected to announce indictments against a new wave of Chinese hackers soon.

"These are exactly the targets I would select."

Crane Hassold, Former FBI Analyst

If China did perpetrate the Marriott hack in 2014, though, that would make it just one of several devastating, roughly concurrent cyberattacks against the United States. That same year, Chinese actors pilfered extremely sensitive and expansive data on tens of millions of US citizens from the Office of Personnel Management. That assault appears to have begun during the first months of 2014—initially detected by OPM in March of that year. And in February 2014, Chinese hackers allegedly breached Anthem insurance, stealing the names, birth dates, addresses, Social Security numbers, and even income data of 80 million people.

Throughout 2015, analysts noted the intelligence value to China of gathering in-depth information on so many people from multiple sources. The diversity of data could allow Chinese espionage agents to check and cross-reference information and track individuals over time. And if you throw the Marriott data into the mix, which included passport numbers like the OPM trove, the espionage effort seems even more comprehensive.

"If I were a foreign intelligence service and wanted to get a complete picture about a specific group of people, these are exactly the targets I would select," says Crane Hassold, senior director of threat research at the phishing defense firm Agari who previously worked as a digital behavior analyst for the FBI. "OPM contained comprehensive data on government employees, Anthem contained detailed personal information, and Marriott contained travel records. From a foreign intelligence perspective it would be very useful."

Taken all together, China's 2014 hacking spree could potentially have revealed data on virtually every adult in the US. And while details about the hacks have trickled out slowly over many years, they all appear to come from a single hacking initiative, albeit perpetrated, presumably, by multiple different hacking groups and actors working under the same umbrella. With the sheer quantity of information collectively gleaned from the attacks, Chinese intelligence analysts could track everything from population trends to more granular details, like mapping personal relationships.

China consistently denied corporate hacking allegations during the timeframe of these intrusions. But while the US government hasn't formally made an attribution in the Marriott case, secretary of state Mike Pompeo seemed to confirm that China was behind it in a Fox & Friends interview Wednesday morning.

On the heels of the attacks, the US and China agreed to a landmark digital truce in 2015 that banned digital assaults on private companies to steal trade secrets. The détente seemed successful for a while, but over the last 18 months China has gradually eroded the agreement, pushing its boundaries and ramping up hacking efforts in areas outside the deal's scope. And even at the time of the deal, China may have known that it already had enough active corporate compromises to sustain its espionage efforts while holding off on new targets.

And even then, less hacking doesn't mean no hacking. "All the data that we had certainly indicated a decrease in activity following the agreement, but a decrease does not mean it went to zero," says J. Michael Daniel, who served as White House cybersecurity coordinator during the Obama administration. "Of course it didn’t, and we never expected it to."

Still, whatever respite the agreement provided seems to have slowly worn away. And the scope of Chinese hacking in 2014 now appears even more extensive than it already seemed. The US government now faces both current digital threats from China, and the possibility that still more revelations about 2014 will eventually emerge.


#####EOF##### Everything We Know About Ukraine's Power Plant Hack | WIRED
Everything We Know About Ukraine's Power Plant Hack

Getty Images

When the US government demonstrated in 2007 how hackers could take down a power plant by physically destroying a generator with just 21 lines of code, many in the power industry dismissed the demo as far-fetched. Some even accused the government of faking the so-called Aurora Generator Test to scare the public.

That attack would certainly require a lot of skill and knowledge to pull off, but hackers don't need to destroy mega-size equipment to plunge a community into darkness. The recent hack of electric utilities in Ukraine shows how easy it can be to cut electricity, with the caveat that taking down the grid isn't always the same as keeping it down.

In the run-up to the holidays last month, two power distribution companies in Ukraine said that hackers had hijacked their systems to cut power to more than 80,000 people. The intruders also sabotaged operator workstations on their way out the digital door to make it harder to restore electricity to customers. The lights came back on in three hours in most cases, but because the hackers had sabotaged management systems, workers had to travel to substations to manually close breakers the hackers had remotely opened.

Days after the outage, Ukrainian officials appeared to blame Russia for the attack, saying that Ukraine's intelligence service had detected and prevented an intrusion attempt "by Russian special services" against Ukraine's energy infrastructure. Last week, speaking at the S4 security conference, former NSA and CIA spy chief Gen. Michael Hayden warned that the attacks were a harbinger of things to come for the US, and that Russia and North Korea were two of the most likely culprits if the US power grid were ever hit.

If hackers were responsible for the outages in Ukraine, these would be the first known blackouts ever caused by a cyberattack. But just how accurate are the news reports? How vulnerable are US systems to similar attacks? And just how solid is the attribution that Russia did it?

To separate fact from speculation, we've collected everything we know and don't know about the outages. This includes new information from a Ukrainian expert involved in the investigation, who says at least eight utilities in Ukraine were targeted, not two.

What exactly occurred?

Around 5:00 p.m. on Dec. 23, as Ukrainians were finishing their workday, the Prykarpattyaoblenergo electric utility in Ivano-Frankivsk Oblast, a region in Western Ukraine, posted a note on its web site saying it was aware that power was out in the region's main city, Ivano-Frankivsk. The cause was still unknown, and the company urged customers not to call its service center, since workers had no idea when power might be restored.

Half an hour later, the company posted another note saying the outage had begun around 4 p.m. and was more widespread than previously believed; it had actually affected eight provinces in the Ivano-Frankivsk region. Ukraine has 24 regions, each of which has 11 to 27 provinces, with a different power company serving each region. Although electricity was by then restored to the city of Ivano-Frankivsk, workers were still trying to get power to the rest of the region.

Then the company made the startling revelation that the outage was likely caused by "interference by outsiders" who gained access to its control system. The company also said that due to a barrage of calls, its call center was having technical difficulties.

Around the same time, a second company, Kyivoblenergo, announced that it also had been hacked. The intruders disconnected breakers for 30 of its substations, killing electricity to 80,000 customers. And, it turned out, Kyivoblenergo had received a flood of calls, too, according to Nikolay Koval, who was head of Ukraine's Computer Emergency Response Team until he left in July and is assisting the companies in investigating the attacks. Instead of coming from local customers, Koval told WIRED that the calls appeared to come from abroad.

It took weeks before more details came out. In January, Ukrainian media said the perpetrators hadn't just cut power; they had also caused monitoring stations at Prykarpattyaoblenergo to go "suddenly blind." Details are scarce, but the attackers likely froze data on screens, preventing them from updating as conditions changed, making operators believe power was still flowing when it wasn't.

To prolong the outage, they also evidently launched a telephone denial-of-service attack against the utility's call center to prevent customers from reporting the outage. TDoS attacks are similar to DDoS attacks that send a flood of data to web servers. In this case, the center's phone system was flooded with bogus calls to prevent legitimate callers from getting through.

Then at some point, perhaps once operators became aware of the outage, the attackers "paralyzed the work of the company as a whole" with malware that affected PCs and servers, Prykarpattyaoblenergo wrote in a note to customers. This likely refers to a program known as KillDisk that was found on the company's systems. KillDisk wipes or overwrites data in essential system files, causing computers to crash. Because it also overwrites the master boot record, infected computers can't reboot.

"The operators' machines were completely destroyed by those erasers and destroyers," Koval told WIRED.

Altogether, it was a multi-pronged attack that was well orchestrated.

"The capabilities used weren't particularly sophisticated but the logistics, planning, use of three methods of attack, coordinated strike against key sites, etc. was extremely well sophisticated," says Robert M. Lee, a former Cyber Warfare Operations Officer for the US Air Force and co-founder of Dragos Security, a critical infrastructure security company.

How many electric utilities were hacked?

Only two admitted being hacked. But Koval says "we are aware of six more companies. We witnessed hacks in up to eight regions of Ukraine. And the list of the attacked may be far bigger than we are aware of."

Koval, who is now CEO of the Ukrainian security firm CyS Centrum, says it's not clear if the other six also experienced blackouts. It's possible they did but that operators fixed them so quickly customers weren't affected, and therefore the companies never disclosed it.

When did the hackers get in?

Also unclear. During the time he headed the Ukrainian CERT, Koval's team helped thwart an intrusion at a different power company. The breach began in March 2015 with a spear-phishing campaign and was still in its early stages when Koval's team helped stop it in July. No power outage occurred, but they did find malware known as BlackEnergy2 on systems, notorious for its use in past attacks against utilities in multiple countries, including the US. BlackEnergy2 is a trojan that opens a backdoor onto systems and is modular in nature, so that plug-ins with additional capabilities can be added.

Why is this important? Because the KillDisk component found on Prykarpattyaoblenergo systems is used with BlackEnergy3, a more sophisticated variant of BlackEnergy2, possibly tying together the two attacks. Hackers have used BlackEnergy3 as a first-stage reconnaissance tool on networks in other intrusions in Ukraine, Koval says, and then installed BlackEnergy2 on specific computers. BlackEnergy3 has more capability than the earlier variant, so it's used first to get into networks and search for specific systems of interest. Once an interesting machine is found, BlackEnergy2, which is more of a pinpoint tool, is used to explore specific systems on the network.

Did BlackEnergy cause the outage?

Likely, no. The mechanics of the outage are clear—breakers on the grid somehow opened—but known variants of BlackEnergy3 aren't capable of doing that, and no other malware that is capable has been found on the Ukrainian machines. Koval says the hackers likely used BlackEnergy3 to get into the utilities' business networks and maneuver their way to the production networks where they found operator stations. Once they were on those machines, they didn't need malware to take down the grid; they could simply control the breakers like any operator.

"It's very easy to get access to an operator's PC," Koval says, though it takes time to find them. The BlackEnergy attackers he tracked in July were very good at lateral movement through networks. "Once they hack and penetrate, they own all the network, all the key nodes," he says.

There has been speculation that KillDisk caused the outage when it wiped data from control systems. But SCADA systems don't work that way, notes Michael Assante, director of SANS ICS, which conducts cybersecurity training for power plant and other industrial control workers. "You can lose a SCADA system... and you never have a power outage," he says.

Did Russia do it?

Given the political climate, Russia makes sense. Tensions have been high between the two nations since Russia annexed Crimea in 2014. And right before the outages, pro-Ukrainian activists physically attacked a substation feeding power to Crimea, knocking out electricity there. Speculation suggests that the recent blackouts in Western Ukraine were retaliation for that attack.

But as we've said before, attribution is a tricky business and can be used for political purposes.

The security firm iSight Partners also thinks Russia is the culprit, because BlackEnergy has been used before by a cybercriminal group iSight calls the Sandworm Team, which it believes is tied to the Russian government. That tie, however, is based only on the fact that the group's hacking campaigns appear to align with the interests of Putin's regime—targets have included Ukrainian government officials and members of NATO, for example. iSight has also published an analysis connecting the KillDisk module used in the attacks to the Sandworm Team (http://www.isightpartners.com/2016/01/ukraine-and-sandworm-team/).

But other security firms, like ESET, are less sure Russia is behind BlackEnergy, noting that the malware has undergone "significant evolution" since it appeared in 2010 and has targeted different industries in many countries. "There is no definite way of telling whether the BlackEnergy malware is currently operated by a single group or several," Robert Lipovsky, senior malware researcher at ESET, said recently.

This week Ukrainian authorities accused Russia of another hack—this one targeting the network of Kiev's main airport, Boryspil. There was no damage, however, and the accusation is based on the possibility that the airport found malware on its systems (that may be the same or related to BlackEnergy) and the command-and-control server used with the malware has an IP address in Russia.

Are US power systems vulnerable to the same attack?

Yes, to a degree. "Despite what's been said by officials in the media, every bit of this is doable in the US grid," says Lee, though he adds that "the impact would have been different and we do have a more hardened grid than Ukraine." But recovery in the US would be harder because many systems here are fully automated, eliminating the option of switching to manual control, as the Ukrainians did, if the SCADA systems are lost.

One thing is clear: the attackers in Ukraine could have done worse damage than they did, such as destroying power generation equipment the way the Aurora Generator Test did. How easy that would be to do is up for debate. "But it certainly is within the specter of possibility," says Assante, who was one of the architects of that government test.

What the Ukrainian hackers did, he says, "is not the limit of what someone could do; this is just the limit of what someone chose to do."

#####EOF##### How GitHub Conquered Google, Microsoft, and Everyone Else | WIRED
How GitHub Conquered Google, Microsoft, and Everyone Else

Github offices
Ariel Zambelich/WIRED

Chris DiBona was worried everything would end up in one place.

This was a decade ago, before the idea of open source software flipped the tech world upside-down. The open source Linux operating system was already running an enormous number of machines on Wall Street and beyond, proving you can generate big value—and big money—by freely sharing software code with the world at large. But the open source community was still relatively small. When coders started new open source projects, they typically did so on a rather geeky and sometimes unreliable internet site called SourceForge.

Chris DiBona.

Google

DiBona, the long-haired open source guru inside Google, was worried that all of the world's open source software would end up in that one basket. "There was only one, and that was SourceForge," he says.

So, like many other companies, Google created its own site where people could host open source projects. It was called Google Code. The company had built its online empire on top of Linux and other open source software, and in providing an alternative to SourceForge, it was trying to ensure open source would continue to evolve, spreading this religion across the net.

But then GitHub came along and spread it faster.

Today, Google announced that after ten years, it's shutting down Google Code. The decision wasn't hard to predict. Over the past three years or so, the company has moved about a thousand projects off of the site. But its official demise is worth noting. Google Code is dying because most of the open source world—a vast swath of the tech world in general—now houses its code on GitHub, a site bootstrapped by a quirky San Francisco startup of the same name. All but a few of those thousand projects are now on GitHub.

Some argue that Google had other, more selfish reasons for creating Google Code: It wanted control, or it was working to get as much digital data onto its machines as it could (as the company is wont to do). But ultimately, GitHub was more valuable than any of that. GitHub democratized software development in a more complete way than SourceForge or Google Code or any other service that came before. And that's the most valuable currency in the software development world.

GitHub: Catnip for Coders

After just seven years on the net, GitHub now boasts almost 9 million registered users. Each month, about 20 million others visit without registering. According to web traffic monitor Alexa, GitHub is now among the top 100 most popular sites on earth.

Github offices

Ariel Zambelich/WIRED

Its popularity is remarkable for a site that's typically used by software coders, not people looking for celebrity news, cat videos, or social chatter. "If you look at the top 100 sites," says Brian Doll, GitHub's vice president of strategy, "you've got a handful of social sites, thirty flavors of Google with national footprints, a lot of media outlets—and GitHub."

The irony of GitHub's success, however, is that the open source world has returned to a central repository for all its free code. But this time, DiBona—like most other coders—is rather pleased that everything is in one place. Having one central location allows people to collaborate more easily on, well, almost anything. And because of the unique way GitHub is designed, the eggs-in-the-same-basket issue isn't as pressing as it was with SourceForge. "GitHub matters a lot, but it's not like you're stuck there," DiBona says.

While keeping all code in one place, you see, GitHub also keeps it in every place. The paradox shows the beauty of open source software—and why it's so important to the future of technology.

Git Ready

How to explain this paradox? It's all about Git, the "version control" software on which GitHub is based. Linus Torvalds, the creator of Linux, created Git in 2005 as a better way to build Linux. Git made it easy for many people to work on the same Linux code at the same time—without stepping on each other's toes.

In short, Git let anyone readily download a copy of the Linux source code to their own machine, make changes, and then, whenever they felt like it, upload those changes back to the central Linux repository. And it did this in a way that everyone's changes would merge seamlessly together. "This is the genius of Git," DiBona says. "And GitHub's genius is that they understood it."
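
That loop is easy to see in miniature. Here is a rough sketch scripted in Python around the git command line (assuming git is installed and configured with a user name and email); the local bare repository stands in for a hosted one like the Linux project's, so no network is needed.

```python
# A rough sketch of the Git loop described above (assumes git is installed
# and configured). A local bare repository stands in for a hosted one.
import pathlib
import subprocess

def git(*args, cwd="."):
    subprocess.run(["git", *args], cwd=cwd, check=True)

git("init", "--bare", "central.git")           # the shared central repository
git("clone", "central.git", "copy")            # a full copy, history and all
pathlib.Path("copy/notes.txt").write_text("my change\n")
git("add", "notes.txt", cwd="copy")            # stage the local edit
git("commit", "-m", "Add notes", cwd="copy")   # committing works offline
git("push", "origin", "HEAD", cwd="copy")      # upload whenever you feel like it
```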

GitHub created a site where any other software project could operate much like the Linux project—a site the average coder could easily grasp. "GitHub is just really smooth," says Rob "CmdrTaco" Malda, who lived through the open source revolution as the editor-in-chief of the tech site Slashdot. "It's a sexy, modern interface."

Now, pretty much everyone hosts their open source projects on GitHub, including Google, Facebook, Twitter, and even Microsoft—once the bete noire of open source software. In recent months, as Microsoft open sourced some of its most important code, it used GitHub rather than its own open source site, CodePlex.

S. "Soma" Somasegar—the 25-year Microsoft veteran who oversees the company’s vast collection of tools for software developers—says CodePlex will continue to operate, as will other repositories like Sourceforge and BitBucket. "We want to make sure it continues being there, as a choice," he tells WIRED. But he sees GitHub as the only place for a project like Microsoft .NET. "We want to meet developers where they are," he says. "The open source community, for the most part, is on GitHub."

Private Meets Public

And yet, thanks to what DiBona calls the "genius of Git," the community also operates off of GitHub. Thanks to Git, coders can not only move code onto their own machines as they work on particular projects, but can easily "fork" code as well, creating new and separate projects. They can keep some code private while publicly exposing the rest on GitHub. Or have nothing private at all.

Github offices

Ariel Zambelich/WIRED

Git and GitHub, you see, aren't just for open source software. They're also for private code. You can easily move code from private to public and back again. You can do your own thing, but also draw on the power of the collective. That's the genius of open source.

Google does all this. Go, the company's new-age programming language, is housed on GitHub, and it's entirely public. A project called Kartes sits in a private GitHub repo, but then it feeds a public project called Kubernetes. The Chrome browser sits on a private Git service inside Google.

At Microsoft, the system works much the same. Internally, the company uses Git via tools like Visual Studio and Team Foundation Server. But it also shares code publicly on GitHub. And in offering tools like Visual Studio and Team Foundation Server to the world at large, Microsoft is among those pushing Git into a world of other businesses. Somasegar estimates that about 20 percent of Microsoft's customers now use Git in some way.

Developers Are People

What's more, the community of software developers is no longer small. These are the people who now run the world—quite literally. Of GitHub's ranking in the top 100, Doll says, "What that tells me is that software is becoming as important as the written word."

The community of developers has become so large that GitHub is now struggling to offer tools that can accommodate activity on its largest projects, says Google engineer Igor Minar, who helps oversee the open source Angular project, which is hosted on GitHub and involves tens of thousands of coders.

Developers are everywhere. So many of them are on GitHub. And on GitHub, they're contributing to tens of millions of open source projects. Minar describes the site as a kind of bazaar that offers just about any piece of code you might want—and so much of it free. "If you need something, you just go to GitHub," he says. "You will find it there." In short, open source has arrived. And, ultimately, that means we can build and shape and improve our world far more quickly than before.

#####EOF##### Transportation | WIRED


See the Gear NASCAR Teams Take On the Road

Kevin Harris from Joe Gibbs Racing shows off all the gear NASCAR teams take on the road to support their cars and drivers. From shape-shifting pit boxes to haulers with 17,000 lb. loads, find out what the pit crews, mechanics and staff travel with every week.

#####EOF##### Of Course Everyone's Already Using the Leaked NSA Exploits | WIRED
Of Course Everyone's Already Using the Leaked NSA Exploits

NSA/WIRED

Last week, an anonymous group calling itself the Shadow Brokers leaked a bunch of National Security Agency hacking tools. Whoever they are, the Shadow Brokers say they still have more data to dump. But the preview has already unleashed some notable vulnerabilities, complete with tips for how to use them.

All of which means anyone—curious kids, petty criminals, trolls—can now start hacking like a spy. And it looks like they are.

Curious to learn if anyone was indeed trying to take advantage of the leak, Brendan Dolan-Gavitt—a security researcher at NYU—set up a honeypot. On August 18 he tossed out a digital lure that masqueraded as a system containing one of the vulnerabilities. For his experiment, Dolan-Gavitt used a Cisco security software bug from the leak that people have learned to fix with workarounds, but that doesn't have a patch yet.

Within 24 hours Dolan-Gavitt saw someone trying to exploit the vulnerability, with a few attempts every day since. "I’m not surprised that someone tried to exploit it," Dolan-Gavitt says. Even for someone with limited technical proficiency, vulnerable systems are relatively easy to find using services like Shodan, a search engine of Internet-connected systems. "People maybe read the blog post about how to use the particular tool that carries out the exploit, and then either scanned the Internet themselves or just looked for vulnerable systems on Shodan and started trying to exploit them that way," Dolan-Gavitt says. He explains that his honeypot was intentionally very visible online and was set up with easily guessable default passwords so it would be easy to hack.
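
Dolan-Gavitt hasn't published the details of his setup here, but the general shape of a low-interaction honeypot is simple: listen where a vulnerable service would, accept whatever connects, and log it. A minimal sketch (the port is a stand-in, not the actual Cisco service):

```python
# A minimal low-interaction honeypot sketch (not Dolan-Gavitt's actual
# setup): accept connections on a port a vulnerable service would use and
# log who connected and what they sent, without running the real software.
import datetime
import socket

PORT = 8443  # stand-in; a real lure would mimic the targeted service's port

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", PORT))
srv.listen(5)

while True:
    conn, (ip, _port) = srv.accept()
    conn.settimeout(3)
    try:
        probe = conn.recv(1024)  # whatever the scanner sends first
    except socket.timeout:
        probe = b""
    print(datetime.datetime.now().isoformat(), ip, probe[:80])
    conn.close()
```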

The findings highlight one of the potential risks that come with hoarding undisclosed vulnerabilities for intelligence-gathering and surveillance. By holding on to bugs instead of disclosing them so they can be patched, spy agencies like the NSA create a potentially dangerous free-for-all if their exploits are exposed.

Companies like Cisco, Juniper, and Fortinet, which had products affected by the Shadow Brokers leak, scrambled for days to patch the bugs or offer workarounds. But even if a patch exists, people have to install it. A leak like this calls attention to particular bugs, putting systems with the vulnerabilities at high risk of being targeted. "Once these zero days are exposed, there's a very small window that you have in order to address those vulnerabilities or exposures," says David Kennedy, CEO of TrustedSec, who formerly worked at the NSA and with the Marine Corps' signals intelligence unit. "There are a number of groups that actively scan the Internet looking for exposures and vulnerabilities so they can get their own access—everything from organized crime to hacker groups to people who are doing ransomware techniques."

The data in last week's Shadow Brokers leak has been definitively linked to the National Security Agency, but speculation continues about how it got out and who leaked it. Maybe someone inside the NSA stole the code. Maybe a nation state like Russia hacked the agency.

Whether you agree with the agency's overarching mission or not, it is clear that there is danger and collateral damage when guarded exploits leak. Intelligence officials told The Week on August 19 that the NSA knows that outsiders sometimes steal its exploits. "It's kind of dangerous," Kennedy says, "because the NSA had these capabilities, which I believe they definitely should have, but when an exploit is discovered I think they should work on responsible disclosure with the affected parties."

The irony is that incredibly clever and sophisticated exploits that potentially cost millions of dollars to develop can end up in the hands of the masses and wreak havoc. As Dolan-Gavitt puts it, "Now bored teenagers can use them."

#####EOF##### This Scaled-Down Armored Truck Could Be the Next Humvee | WIRED
This Scaled-Down Armored Truck Could Be the Next Humvee

During the wars in Iraq and Afghanistan, the Department of Defense figured out the Humvee—its multi-purpose troop transport vehicle, designed in the 1980s when everyone thought the US would be fighting the Soviets across Europe—was woefully ill-equipped to deal with the type of asymmetric warfare American soldiers faced in the Middle East.

Humvees, produced by contractor AM General, weren't really designed as combat vehicles, and offer little protection to occupants against improvised explosive devices and rocket-propelled grenades. Since those proved to be major threats in Iraq and Afghanistan, the military hurriedly ordered armor upgrades that could be fitted to existing Humvees, but the added weight ruined the vehicle's valuable off-road capabilities. It put more money into large, heavy, and expensive mine-resistant ambush protected (MRAP) vehicles, which were hugely successful at protecting occupants but too big for many mission profiles.

Now, with the war in Iraq over (sort of) and combat in Afghanistan winding down, the DoD can spend its time and money on a new, major acquisition: the Joint Light Tactical Vehicle (JLTV), the machine that will replace the venerable but outdated Humvee.

One of the frontrunners going after the $9.4 billion contract to design and produce that replacement is the Wisconsin-based Oshkosh Corporation, which calls its vehicle the Light Combat Tactical All-Terrain Vehicle. The L-ATV (Oshkosh is fluent in acronym-obsessed military parlance) is the faster little brother to its popular MRAP, the M-ATV. "Future battlefields will have an unpredictable level of terrain and tactics and threats," says John Bryant, senior vice president of defense programs for Oshkosh Defense. "Troops require an all-terrain vehicle that's scalable, net-ready, that performs off road, and is highly reliable."

The JLTV in action.

Oshkosh Defense

It's easy to make a vehicle that's small and fast, but with limited protective capabilities. It's easy to make a big vehicle that is slower but keeps everyone really safe. The goal of the JLTV is to provide MRAP-levels of protection and Humvee-like maneuverability. Oshkosh wanted to take all the protection offered by the MRAP and shrink it down to something much smaller, with better off-road capabilities and the ability to be transported more easily by air and sea.

"M-ATV is really the benchmark of off-road protected mobility right now," says Bryant. "We had the opportunity to refine our Core1080 integrated protection system so that we could provide that level of protection on a much smaller vehicle." The L-ATV is approximately 30 percent smaller than the M-ATV, so maintaining the same level of protection even on the lighter vehicle is no small feat.

"The M-ATV provided great off-road mobility and survivability, but we sort of did it through mass," explains Bryant. With the L-ATV, Oshkosh "optimized every single component" for survivability, allowing the company to offer the same protection in a smaller platform. It helps that the military hasn’t rushed the JLTV design process, as it did with that of the MRAP, which was developed under an urgent deadline for a specific, in-theater threat.

"Every time we come up with a new level of protected mobility off road, the first thing warfighting customers around the world say is: 'That's awesome, now can you make it even smaller?'"

Oshkosh Defense

The JLTV program has a much wider range of requirements than the MRAP had. Key requirements include survivability, transportability, and multi-purpose flexibility for a wide variety of scenarios. Taking big-truck capabilities and putting them in a smaller, faster, more maneuverable vehicle was the goal.

The L-ATV includes a bunch of neat technology, too. A computer-controlled independent suspension system allows for 20 inches of wheel travel, improving off-road performance and letting the vehicle park in confined spaces like amphibious ships. And while there’s enough on-board power to supply all the computers and sensors stuffed into the modern fighting vehicle, an optional diesel-electric hybrid system can provide 70 kilowatts of on-board and exportable power for external operations.

The curb weight of the L-ATV is under 14,000 pounds, and it can carry an additional 4,000 pounds of gear and soldiers. That's half the weight of an MRAP, and light enough that two can be sling-loaded underneath a helicopter for air transport.

Oshkosh wouldn't reveal the L-ATV's top speed or performance specs, for competitive reasons: This winter, the Pentagon will assess Oshkosh's final proposal against those of two other finalists, Lockheed Martin and AM General, with a final award coming sometime next summer. The contract is to build 17,000 vehicles across the first eight years of the program. That works out to more than $550,000 per JLTV, including delivery, add-on kits, logistics, technical manuals, interim support, and everything else that goes along with a major program like this.

Once the DoD picks its JLTV for the future, there will be a slow ramp-up as the military does its own testing, including live fire and other operational testing, with troops in the field getting their first crack at the new vehicle around 2018. But Oshkosh says it could begin production almost immediately, if the need arose. "We can be producing a thousand a month of these a few months from now if the requirement changed and became more urgent," says Bryant. "We could ramp this up right now, if need be."

#####EOF##### WIRED Videos

WIRED25

Events Amazon CEO Jeff Bezos spoke with WIRED’s Steven Levy as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Events Instagram Cofounder Kevin Systrom spoke with WIRED's Lauren Goode as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Business Twitter and Square Cofounder and CEO Jack Dorsey spoke with WIRED’s Editor-in-Chief Nicholas Thompson as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Events Salesforce Chairman and Co-CEO Marc Benioff spoke with WIRED’s Adam Rogers as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Business Google CEO Sundar Pichai spoke with WIRED’s Steven Levy as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Business Microsoft CEO Satya Nadella and Chief Accessibility Officer Jenny Lay-Flurrie spoke with WIRED’s Editor-in-Chief Nicholas Thompson as part of WIRED25, WIRED’s 25th anniversary celebration in San Francisco.
Business WIRED editor-in-chief Nicholas Thompson spoke with Jeff Weiner, CEO of LinkedIn about the future of work at WIRED's 25th anniversary celebration in San Francisco.
Business WIRED editor-in-chief Nicholas Thompson spoke with Stacy Brown-Philpot, CEO of TaskRabbit, about the future of work at WIRED's 25th anniversary celebration in San Francisco.
#####EOF##### How the World's First Computer Was Rescued From the Scrap Heap | WIRED
How the World's First Computer Was Rescued From the Scrap Heap

Eccentric billionaires are tough to impress, so their minions must always think big when handed vague assignments. Ross Perot’s staffers did just that in 2006, when their boss declared that he wanted to decorate his Plano, Texas, headquarters with relics from computing history. Aware that a few measly Apple I's and Altair 8800's wouldn’t be enough to satisfy a former presidential candidate, Perot’s people decided to acquire a more singular prize: a big chunk of ENIAC, the "Electronic Numerical Integrator And Computer." The ENIAC was a 27-ton, 1,800-square-foot bundle of vacuum tubes and diodes that was arguably the world’s first true computer. The hardware that Perot’s team diligently unearthed and lovingly refurbished is now accessible to the general public for the first time, back at the same Army base where it almost rotted into oblivion.

ENIAC was conceived in the thick of World War II, as a tool to help artillerymen calculate the trajectories of shells. Though construction began a year before D-Day, the computer wasn’t activated until November 1945, by which time the U.S. Army’s guns had fallen silent. But the military still found plenty of use for ENIAC as the Cold War began—the machine’s 17,468 vacuum tubes were put to work by the developers of the first hydrogen bomb, who needed a way to test the feasibility of their early designs. The scientists at Los Alamos later declared that they could never have achieved success without ENIAC’s awesome computing might: the machine could execute 5,000 instructions per second, a capability that made it a thousand times faster than the electromechanical calculators of the day. (An iPhone 6, by contrast, can zip through 25 billion instructions per second.)
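
Those speed comparisons are easy to check against the article's own figures; the electromechanical number below is simply what the thousand-fold claim implies.

```python
# Back-of-envelope check of the speed comparisons above.
eniac = 5_000               # ENIAC: instructions per second
calculator = eniac / 1_000  # implied by the "thousand times faster" claim
iphone6 = 25_000_000_000    # iPhone 6: instructions per second

print(f"electromechanical calculator: ~{calculator:.0f} instructions/sec")
print(f"iPhone 6 vs ENIAC: {iphone6 / eniac:,.0f}x faster")  # 5,000,000x
```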

When the Army declared ENIAC obsolete in 1955, however, the historic invention was treated with scant respect: its 40 panels, each of which weighed an average of 858 pounds, were divvied up and strewn about with little care. Some of the hardware landed in the hands of folks who appreciated its significance—the engineer Arthur Burks, for example, donated his panel to the University of Michigan, and the Smithsonian managed to snag a couple of panels for its collection, too. But as Libby Craft, Perot’s director of special projects, found out to her chagrin, much of ENIAC vanished into disorganized warehouses, a bit like the Ark of the Covenant at the end of Raiders of the Lost Ark.

Lost in the bureaucracy

“As time went on, new people would come in and the storage records they got probably weren’t as good as they should have been,” says Craft, who was the person most responsible for tracking down what remained of ENIAC. “And so when they’d need more space, they’d look at this hunk of metal that they didn’t know anything about. And they’d go ahead and dispose of it.”

Craft was on the verge of ending her search when an Army functionary dug up documents indicating that some panels had once been shipped from the Aberdeen (MD) Proving Ground to Oklahoma’s Fort Sill, home to the Army’s field artillery museum. When Craft contacted Fort Sill to inquire, the museum’s curator was stunned to discover that he did, indeed, possess the world’s largest trove of ENIAC hardware—nine panels in total, all stored in anonymous wooden crates that hadn’t been pried open in years. Fort Sill officials are unclear as to how they ended up with nearly a quarter of ENIAC, pieces of which also came to Oklahoma from the Anniston (AL) Army Depot.

An ENIAC technician changes a tube.

US Army

Craft struck a deal to borrow eight of Fort Sill’s panels in exchange for a promise to restore the hardware to some semblance of its former glory. The restoration project was assigned to Dan Gleason, a video-conferencing engineer at Perot Systems who had zero experience with fixing vintage computers. Gleason realized early on that he couldn’t make his portion of ENIAC run actual calculations—such an endeavor would require all 40 panels, not to mention thousands of new components and technical know-how that had long been forgotten. But he resolved to make the computer at least appear like it was hard at work figuring out the best flight paths for howitzer shells.

Restoration and the return home

The first step for Gleason was to address the panels’ cosmetic deficiencies; the exterior metal was badly rusted. (One of the eight panels was so water damaged, in fact, that it couldn’t be salvaged.) Gleason sandblasted the panels, then coated them with black wrinkle paint that he procured from dozens of auto-body shops. Once the paint dried, Gleason and his son, Jonathan, laboriously soldered 600 new lamp bulbs into place. Those bulbs were then connected to a motion sensor, so they would flash in random order when an observer approached. Gleason also fabricated a massive steel frame that prevents the panels from tipping over and crushing the protruding vacuum tubes on their sides (not to mention unfortunate passers-by).

The revamped ENIAC went on display at Perot’s office building in 2007, but relatively few people had the chance to see it; the building is a secure facility that doesn’t welcome the general public, though a few computing nerds were able to arrange special tours. But Perot’s company, which was purchased by Dell in 2009, recently announced that it will soon be moving to new digs, so the time seemed right to return the panels to Fort Sill. The 6,864 pounds’ worth of computing history, encased in mounds of bubble wrap, made its way back to Oklahoma in late September. Because Dan Gleason had the foresight to wire the panels’ lights using simple spade connectors and an off-the-shelf 12-channel DMX controller, the Fort Sill museum had little trouble getting ENIAC back in working order. The toughest part was piecing together Gleason’s steel frame, which was more elaborate than museum officials had anticipated.

The ENIAC panels went on display at Fort Sill in late October, though some more restoration work remains to be done. The museum is in the process of obtaining a few new vacuum tubes, for example, to give the unit an even more authentic appearance. The panels will never be able to run any bona fide calculations, of course, but that’s probably for the best. Even in its heyday, ENIAC required a whopping 30 milliseconds to figure out the square root of a complicated number. Who has the patience for such long waits nowadays?

The Fort Sill Field Artillery Museum is open from 9 a.m. to 5 p.m., Tuesday through Saturday. Admission is free, but visitors over the age of 15 will need to show a valid photo ID to enter the base.

#####EOF##### The Inside Story of Mt. Gox, Bitcoin's $460 Million Disaster | WIRED
The Inside Story of Mt. Gox, Bitcoin's $460 Million Disaster

Mark Karpeles, the chief executive officer of bitcoin exchange Mt. Gox, center, is escorted as he leaves the Tokyo District Court this past Friday.
Photo: Tomohiro Ohsumi/Bloomberg via Getty Images

From a distance, the world's largest bitcoin exchange looked like a towering example of renegade entrepreneurism. But on the inside, according to some who were there, Mt. Gox was a messy combination of poor management, neglect, and raw inexperience.

Its collapse into bankruptcy last week – and the disappearance of $460 million, apparently stolen by hackers, and another $27.4 million missing from its bank accounts – came as little surprise to people who had knowledge of the Tokyo-based company's inner workings. The company, these insiders say, was largely a reflection of its CEO and majority stakeholder, Mark Karpeles, a man who was more of a computer coder than a chief executive and yet was sometimes distracted even from his technical duties when they were most needed. "Mark liked the idea of being CEO, but the day-to-day reality bored him," says one Mt. Gox insider, who spoke on condition of anonymity.

Last week, after a leaked corporate document said that hackers had raided the Mt. Gox exchange, Karpeles confirmed that a huge portion of the money controlled by the company was gone. "We had weaknesses in our system, and our bitcoins vanished. We've caused trouble and inconvenience to many people, and I feel deeply sorry for what has happened," Karpeles said, speaking at a Tokyo press conference called to announce the company's bankruptcy. This would be the second time the exchange was hacked. In June 2011, attackers lifted the equivalent of $8.75 million.

Bitcoin promises to give a bank account to anyone with a mobile phone, no ID required. It's clearly an amazing and potentially world-changing technology – the first viable, decentralized, reliable form of digital cash. It could democratize international finance. But it's also a technology that was pushed forward by a community of people who were unprepared or unwilling to deal with even the basics of everyday business. A new wave of entrepreneurs may bring the digital currency a new level of respectability, but over its first several years, bitcoin has been driven largely by computer geeks with little experience in the financial world. The most prominent example is Mark Karpeles.

The Mt. Gox offices in Tokyo.

Photo: Ariel Zambelich/WIRED

The King of Bitcoin

The 28-year-old Karpeles was born in France, but after spending some time in Israel, he settled down in Japan. There he got married, posted cat videos, and became a father. In 2011, he acquired the Mt. Gox exchange from an American entrepreneur named Jed McCaleb.

McCaleb had registered the Mtgox.com web domain in 2007 with the idea of turning it into a trading site for the wildly popular Magic: The Gathering game cards. He never followed through on that idea, but in late 2010, McCaleb decided to repurpose the domain as a bitcoin exchange. The idea was simple: he'd provide a single place to connect bitcoin buyers and sellers. But soon, McCaleb was getting wires for tens of thousands of dollars and, realizing he was in over his head, he sold the site to Karpeles, an avid programmer, foodie, and bitcoin enthusiast who called himself Magicaltux in online forums.

Karpeles soon set about rewriting the site's back-end software, eventually turning it into the world's most popular bitcoin exchange. A June 2011 hack took the site offline for several days, and according to bitcoin enthusiasts Jesse Powell and Roger Ver, who helped the company respond to the hack, Karpeles was strangely nonchalant about the crisis. But he and Mt. Gox eventually made good on their obligations, earning a reputation as honest players in the bitcoin community. Other bitcoin companies had been hacked and lost customer funds. Most of the time, they simply folded. But Karpeles and Mt. Gox did not.

"He likes to be praised, and he likes to be called the king of bitcoin"
–Mt. Gox insiderAs bitcoin prices took off, jumping from $13 at the start of 2013 to more than $1,200 at its peak, Karpeles, as Mt. Gox's largest stake holder, appeared to become an extremely wealthy man. Mt. Gox did not offer company equity to employees, and by the time of the most recent hack, the company had squirreled away more than 100,000 bitcoins, or $50 million. Karpeles owns 88 percent of the company and McCaleb 12 percent, according to a leaked Mt. Gox business plan.

When Karpeles was interviewed by Reuters in the spring of 2013 – seated, inexplicably, on top of a blue pilates ball – he was a major player in the bitcoin world. He had ponied up 5,000 bitcoins to help kickstart the Bitcoin Foundation, a not-for-profit bitcoin software development and lobbying group, where he was a board member (he has since resigned). And, according to insiders, he thought nothing of dropping the business of the day to order flat screen TVs or $400 lunches for the staff of Gox's expanded Tokyo headquarters, which now occupies three floors of a modern office building in the city's Shibuya neighborhood. "He likes to be praised, and he likes to be called the king of bitcoin," says another insider who spoke on condition of anonymity. "He always talks about how he's a member of Mensa and has an above-average IQ."

Citizen Karpeles

But beneath it all, some say, Mt. Gox was a disaster in waiting. Last year, a Tokyo-based software developer sat down in Gox's first-floor meeting room to talk about working for the company. "I thought it was going to be really awesome," says the developer, who also spoke on condition of anonymity. Soon, however, there were some serious red flags.

Mt. Gox, he says, didn't use any type of version control software – a standard tool in any professional software development environment. This meant that any coder could accidentally overwrite a colleague's code if they happened to be working on the same file. According to this developer, the world's largest bitcoin exchange had only recently introduced a test environment, meaning that, previously, untested software changes were pushed out to the exchange's customers – not the kind of thing you'd see on a professionally run financial services website. And, he says, there was only one person who could approve changes to the site's source code: Mark Karpeles. That meant that some bug fixes – even security fixes – could languish for weeks, waiting for Karpeles to get to the code. "The source code was a complete mess," says one insider.

The unfinished site of the Bitcoin Cafe.

By the fall of 2013, Mt. Gox's business was also a mess. Federal agents had seized $5 million from the company's U.S. bank account, because the company had not registered with the government as a money transmitter, and Mt. Gox was being sued for $75 million by a former business partner called CoinLab. U.S. customers complained of months-long delays withdrawing dollars from the exchange, and Mt. Gox had tumbled from the world's number one bitcoin exchange to position number three.

But Karpeles was obsessed with a new project: The Bitcoin Cafe. Inspired by a French bistro, it would be a stylish hang-out located in the same building as the Mt. Gox offices, a very-new-looking building of metal and glass within walking distance of Tokyo's largest train station. You could drop by for a beer or some wine, and – using a cash register proudly hacked by Mark Karpeles – you could buy it all with bitcoin. When WIRED tried to meet with Karpeles and Mt. Gox at their offices this past October – and a company representative turned us away, saying that legal reasons prevented Mt. Gox from talking to the press – the placard in the lobby of the building already identified the cafe. This company representative said it would open by the end of the year. It never did.

One insider says that Mt. Gox spent the equivalent of $1 million on the cafe venture, renovating Mt. Gox's office building to Karpeles' specifications. At a time when Gox's business was falling apart, this insider says, the project was a major distraction. "[Karpeles] was super-proud of being able to use his hacked cash register with the code he wrote," this insider says.

Says another insider: "Aside from the cafe, he liked to spend time fixing servers, setting up networks and installing gadgets... probably distracting himself from dealing with the real issues that the company was up against."

Then, in February, the company's fortunes took another turn. Mt. Gox stopped paying out customers in bitcoins, citing a flaw in the digital currency, and after days of silence from the company, protesters turned up outside its offices, asking whether it was insolvent.

Years-Long Hack

According to a leaked Mt. Gox document that hit the web last week, hackers had been skimming money from the company for years. The company now says that it's out a total of 850,000 bitcoins, more than $460 million at Friday's bitcoin exchange rates. When bitcoin enthusiast Jesse Powell heard this, he was reminded of June 2011.

After Mt. Gox was hacked for the first time in summer of 2011, a friend asked Powell to help out, and soon, the San Francisco entrepreneur found himself on a plane to Tokyo. After landing, he rushed to Shibuya station, where he was met by his friend, Roger Ver, one of the world's biggest bitcoin supporters who just happened to live across the street from Mt. Gox. Without bothering to drop off Powell's bags, the two rushed to the Mt. Gox offices to see what they could do. They worked through the week with Karpeles, other employees, and a handful of other bitcoin enthusiasts. They answered support inquiries, did troubleshooting on the site, and tried to support the tiny company in any way they could. At one point, Powell rushed to the Apple store and came back with $5,000 worth of computers that could support the cause. But two days later, the site was still offline.

Ver and Powell were set to work through the weekend, but when they arrived at the company's tiny office that Saturday, there was a surprise. Mark Karpeles had decided to take the weekend off. The two volunteers were flabbergasted. "I thought that was completely insane and demoralizing for the rest of the team," Powell remembers. On Monday, Powell says, Karpeles did return to work, but he spent part of the day stuffing envelopes. "I was like: 'Dude why are you doing this? You can do this anytime. The site is offline. You need to get the site online.'"

Powell, who now runs a Mt. Gox competitor called Kraken, last met with Karpeles in January, before news of the latest hack broke. They had lunch in Tokyo, and Karpeles seemed unworried about Gox's future. He was excited about his Bitcoin Cafe. "It was probably some light for them in a very dark world of dealing with banks and customer complaints all day," Powell says. "I'm sure that Mark has been very stressed for a long time and probably the Bitcoin Cafe was a fun project." But now that world is even darker.

#####EOF##### Machine Learning Can Create Fake ‘Master Key’ Fingerprints | WIRED
Machine Learning Can Create Fake ‘Master Key’ Fingerprints

Getty Images

Just like any lock can be picked, any biometric scanner can be fooled. Researchers have shown for years that the popular fingerprint sensors used to guard smartphones can sometimes be tricked using a lifted print or a person's digitized fingerprint data. But new findings from computer scientists at New York University's Tandon School of Engineering could raise the stakes significantly. The group has developed machine learning methods for generating fake fingerprints—called DeepMasterPrints—that not only dupe smartphone sensors, but can successfully masquerade as prints from numerous different people. Think of it as a skeleton key for fingerprint-protected devices.

The work builds on research into the concept of a "master print" that combines common fingerprint traits. In initial tests last year, NYU researchers explored master prints by manually identifying various features and characteristics that could combine to make a fingerprint that authenticates multiple people. The new work vastly expands the possibilities, though, by developing machine learning models that can churn out master prints.

"Even if a biometric system has a very low false acceptance rate for real fingerprints, they now have to be fine-tuned to take into account synthetic fingerprints, too," says Philip Bontrager, a PhD candidate at NYU who worked on the research. "Most systems haven’t been hardened against an artificial fingerprint attack, so it’s something on the algorithmic side that people designing sensors have to be aware of now."

The research capitalizes on the shortcuts that mobile devices take when scanning a user's fingerprint. The sensors are small enough that they can only "see" part of your finger at any given time. As such, they make some assumptions based on a snippet, which also means that fake fingerprints likely need to satisfy fewer variables to trick them.

The researchers trained neural networks on images of real fingerprints, so the system could begin to output a variety of realistic snippets. Then they used a technique called "evolutionary optimization" to assess what would succeed as a master print—with every characteristic as familiar and convincing as possible—and guide the output of the neural networks.
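
The NYU models themselves aren't reproduced here, but the search loop has a standard shape. A hedged sketch, with stand-ins for the trained generator and the matcher score so the code runs end to end:

```python
# A hedged sketch of evolutionary optimization over a generator's latent
# vector; `generate` and `match_score` are stand-ins, not the NYU code.
import random

LATENT_DIM = 100

def generate(z):
    return z  # stand-in: a real generator maps a latent vector to an image

def match_score(image):
    # Stand-in score so the sketch runs; a real run would count how many
    # enrolled fingerprints the generated image matches.
    return -sum((x - 0.5) ** 2 for x in image)

def evolve(generations=200, pop=50, sigma=0.1):
    latents = [[random.gauss(0, 1) for _ in range(LATENT_DIM)]
               for _ in range(pop)]
    for _ in range(generations):
        latents.sort(key=lambda z: match_score(generate(z)), reverse=True)
        elite = latents[: pop // 5]  # keep the best-scoring latents
        latents = elite + [[g + random.gauss(0, sigma)
                            for g in random.choice(elite)]
                           for _ in range(pop - len(elite))]  # mutate to refill
    return max(latents, key=lambda z: match_score(generate(z)))

best = evolve()  # the latent vector whose output scored highest
```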

The researchers then tested their synthetic fingerprints against the popular VeriFinger matcher—used in a number of consumer and government fingerprint authentication schemes worldwide—and two other commercial matching platforms, to see how many identities their synthetic prints matched with.

"Most systems haven’t been hardened against an artificial fingerprint attack."

Philip Bontrager, NYU

Fingerprint matchers can be set with different levels of security in mind. A top secret weapons facility would want the lowest possible chance of a false positive. A regular, consumer smartphone would want to keep obvious frauds out, but not be so sensitive that it frequently rejects the actual owner. Against a moderately stringent setting, the research team's master prints matched with anywhere from two or three percent of the records in the different commercial platforms up to about 20 percent, depending on which prints they tested.

Overall, the master prints got 30 times more matches than the average real fingerprint—even at the highest security settings, where the master prints didn't perform particularly well. Think of a master print attack, then, like a password dictionary attack, in which hackers don't need to get it right in one shot, but instead systematically try common combinations to break into an account.
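
The analogy is easy to quantify. Treating each try as independent (an approximation), the per-print match rates above imply the following odds for an attacker allowed a handful of attempts, as many phones are before demanding a passcode:

```python
# Rough odds for the dictionary-attack analogy above, treating attempts as
# independent (an approximation). Match rates are the article's figures;
# the five-attempt budget is an assumption about typical phone lockouts.
def p_unlock(rate, attempts=5):
    return 1 - (1 - rate) ** attempts

for rate in (0.02, 0.03, 0.20):
    print(f"per-print match rate {rate:.0%}: "
          f"{p_unlock(rate):.0%} chance within 5 tries")
# 2% -> 10%, 3% -> 14%, 20% -> 67%
```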

The researchers note that they did not make capacitive printouts or other replicas of their machine learning-generated master prints, which means they didn't attempt to unlock real smartphones. Anil Jain, a biometrics researcher at Michigan State University who did not participate in the project, sees that as a real shortcoming; it's hard to extrapolate the research out to an actual use case. But he says the strength of the work is in the machine learning techniques it developed. "The proposed method works much better than the earlier work," Jain says.

The NYU researchers plan to continue refining their methods. They hope to raise awareness in the biometrics industry about the importance of defending against synthetic readings. They suggest that developers should start testing their devices against synthetic prints as well as real ones to make sure the proprietary systems can spot phonies. And the group notes that it has only begun to scratch the surface of understanding how exactly master prints succeed in tricking scanners. It's possible that sensors could increase their fidelity or depth of analysis in order to defeat master prints.

"Even as these synthetic measures get better and better, if you’re paying attention to it you should be able to design systems that are at higher and higher resolution and aren’t easily attacked," Bontrager says. "But it will affect cost and design."


#####EOF##### WIRED Magazine & Digital Subscription – Offers
#####EOF##### Photo | WIRED


Why Averaging 95% From the Free-Throw Line is Almost Impossible

The very best basketball free throw shooters can sink the ball about 90 percent of the time. What would it take to get to 95 percent? WIRED's Robbie Gonzalez steps up to the foul line with top shooter Steve Nash to find out.

#####EOF##### Stealing Data From Computers Using Heat | WIRED
Stealing Data From Computers Using Heat

Cultura Science/Getty Images

Air-gapped systems, which are isolated from the internet and from any other systems that connect to it, are used in situations that demand high security because they make siphoning data difficult.

Air-gapped systems are used in classified military networks, the payment networks that process credit and debit card transactions for retailers, and industrial control systems that operate critical infrastructure. Even journalists use them to prevent intruders from remotely accessing sensitive data. Siphoning data from an air-gapped system generally requires physical access to the machine, using removable media like a USB flash drive, or a FireWire cable connecting the air-gapped system directly to another computer.

But security researchers at Ben Gurion University in Israel have found a way to retrieve data from an air-gapped computer using only heat emissions and a computer's built-in thermal sensors. The method would allow attackers to surreptitiously siphon passwords or security keys from a protected system and transmit the data to an internet-connected system that's in close proximity and that the attackers control. They could also use the internet-connected system to send malicious commands to the air-gapped system using the same heat and sensor technique.

In a video demonstration produced by the researchers, they show how they were able to send a command from one computer to an adjacent air-gapped machine to re-position a missile-launch toy the air-gapped system controlled.

The proof-of-concept attack requires both systems to first be compromised with malware. And currently, the attack allows for just eight bits of data to be reliably transmitted over an hour—a rate that is sufficient for an attacker to transmit brief commands or siphon a password or secret key but not large amounts of data. It also works only if the air-gapped system is within 40 centimeters (about 15 inches) from the other computer the attackers control. But the researchers, at Ben Gurion's Cyber Security Labs, note that this latter scenario is not uncommon, because air-gapped systems often sit on desktops alongside Internet-connected ones so that workers can easily access both.

The method was developed by Mordechai Guri in a project overseen by his adviser, Yuval Elovici. The research represents just a first step, says Dudu Mimran, chief technology officer at the lab; the researchers plan to present their findings at a security conference in Tel Aviv next week and have released a paper describing their work (.pdf).

"We expect this pioneering work to serve as the foundation of subsequent research, which will focus on various aspects of the thermal channel and improve its capabilities," the researchers note in their paper. With additional research, they say they may be able to increase the distance between the two communicating computers and the speed of data transfer between them.

In their video demonstration, they used one computer tower to initiate a command to an adjacent computer tower representing an air-gapped system. But future research might involve using the so-called internet of things as an attack vector—an internet-connected heating and air conditioning system or a fax machine that's remotely accessible and can be compromised to emit controlled fluctuations in temperature.

How It Works

Computers produce varying levels of heat depending on how much processing they're doing. In addition to the CPU, the graphics-processing unit and other motherboard components produce significant heat as well. A system that is simultaneously streaming video, downloading files and surfing the internet will consume a lot of power and generate heat.

To monitor the temperature, computers have a number of built-in thermal sensors to detect heat fluctuations and trigger an internal fan to cool the system off when necessary or even shut it down to avoid damage.

The attack, which the researchers dubbed BitWhisper, uses these sensors to send commands to an air-gapped system or siphon data from it. The technique works a bit like Morse code, with the transmitting system using controlled increases of heat to communicate with the receiving system, which uses its built-in thermal sensors to then detect the temperature changes and translate them into a binary "1" or "0."

To communicate a binary "1" in their demonstration, for example, the researchers increased the heat emissions of the transmitting computer by just 1 degree over a predefined timeframe. Then, to transmit a "0," they restored the system to its base temperature for another predefined timeframe. The receiving computer, representing the air-gapped system, then translated this binary code into a command that caused it to reposition the toy missile launcher.
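
For readers who want the encoding made concrete, here is a rough sketch, in Python, of what each side of such a channel could look like on a Linux machine that exposes its sensors under /sys/class/thermal. This is our own illustration under those assumptions, not the researchers' BitWhisper code; the real attack used far longer bit windows and more careful signal processing.

    import time

    SENSOR = "/sys/class/thermal/thermal_zone0/temp"   # millidegrees C on Linux
    WINDOW = 60.0       # seconds per bit; BitWhisper's windows ran to minutes
    DELTA = 1.0         # degrees above baseline that signal a "1"

    def read_temp():
        with open(SENSOR) as f:
            return int(f.read()) / 1000.0

    def send_bit(bit):
        deadline = time.time() + WINDOW
        while time.time() < deadline:
            if bit:
                sum(i * i for i in range(100000))    # busy work heats the CPU
            else:
                time.sleep(0.1)                      # idle back toward baseline

    def receive_bit(baseline):
        time.sleep(WINDOW)                           # sample at window's end
        return 1 if read_temp() >= baseline + DELTA else 0

    # Transmitter: call send_bit(b) for each bit of the message.
    # Receiver: record a baseline with read_temp(), then call
    # receive_bit(baseline) once per agreed-upon window.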

The researchers designed their malware to take into consideration normal temperature fluctuations of a computer and distinguish these from fluctuations that signal a system is trying to communicate. And although their malware increased the temperature by just one degree to signal communication, an attacker could increase the temperature by any amount within reason, since too sharp a spike could arouse the suspicion that accompanies an overactive computer fan or an overheating machine.

Communication can also be bi-directional with both computers capable of transmitting or receiving commands and data. The same method, for example, could have been used to cause their air-gapped system to communicate a password to the other system.

The malware on each system can be designed to search for nearby PCs by instructing an infected system to periodically emit a thermal ping—to determine, for example, when a government employee has placed his infected laptop next to a classified desktop system. The two systems would then engage in a handshake, involving a sequence of "thermal pings" of 1 degree Celsius each, to establish a connection. But in situations where the internet-connected computer and the air-gapped one are in close proximity for an ongoing period, the malware could simply be designed to initiate a data transmission automatically at a specified time—perhaps at midnight, when no one's working, to avoid detection—without needing to conduct a handshake each time.

The time it takes to transmit data from one computer to another depends on several factors, including the distance between the two computers and their position and layout. The researchers experimented with a number of scenarios, placing computer towers side by side, back to back, and stacked on top of each other. The time it took them to increase the heat and transmit a "1" varied between three and 20 minutes depending on the setup. Restoring the system to its base temperature to transmit a "0" usually took longer.
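
Those per-bit timings are consistent with the throughput cited earlier, as a quick back-of-the-envelope calculation (ours) shows:

    # Rough data rates implied by the per-bit timings above.
    for minutes_per_bit in (3, 20):
        print(f"{minutes_per_bit:2d} min per bit -> {60 // minutes_per_bit} bits per hour")
    # Three minutes per bit gives 20 bits/hour; 20 minutes gives 3 bits/hour,
    # bracketing the roughly 8 bits per hour the researchers report as reliable.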

Other Air-Gap Hacking Techniques

This isn't the only way to communicate with air-gapped systems without using physical media. Past research by other teams has focused on inaudible acoustic channels, optical channels, and electromagnetic emissions. All of these, however, are unidirectional channels, meaning they can be used to siphon data but not to send commands to an air-gapped system.

The same Ben Gurion researchers previously showed how they could siphon data from an air-gapped machine using radio frequency signals and a nearby mobile phone. That proof-of-concept hack involved radio signals generated and transmitted by an infected machine's video card, which could be used to send passwords and other data over the air to the FM radio receiver in a mobile phone.

The NSA reportedly has been using a more sophisticated version of this technique to not only siphon data from air-gapped machines in Iran and elsewhere but also to inject them with malware, according to documents leaked by Edward Snowden. Using an NSA hardware implant called the Cottonmouth-I, which comes with a tiny embedded transceiver, the agency can extract data from targeted systems using RF signals and transmit it to a briefcase-sized relay station up to 8 miles away.

There's no evidence yet that the spy agency is using heat emissions and thermal sensors to steal data and control air-gapped machines—their RF technique is much more efficient than thermal hacking. But if university researchers in Israel have explored the idea of thermal hacking as an attack vector, the NSA has likely considered it too.

#####EOF##### Russia Linked to Triton Industrial Control Malware | WIRED
Russia Linked to Disruptive Industrial Control Malware

Sergey Alimov/Getty Images

In December, researchers spotted a new family of industrial control malware that had been used in an attack on a Middle Eastern energy plant. Known as Triton, or Trisis, the suite of hacking tools is one of only a handful of known cyberweapons developed specifically to undermine or destroy industrial equipment. Now, new research from security firm FireEye suggests that at least one element of the Triton campaign originated from Russia. And the tipoff ultimately came from some pretty boneheaded mistakes.

Russian hackers are in the news for all sorts of activity lately, but FireEye's conclusions about Triton are somewhat surprising. Indications that the 2017 Triton attack targeted a Middle Eastern petrochemical plant fueled the perception that Iran was the aggressor—especially following reports that the victim was specifically a Saudi Arabian target. But FireEye's analysis reveals a very different geopolitical context.

FireEye specifically traced the Triton intrusion malware to Russia's Central Scientific Research Institute of Chemistry and Mechanics, known by its Russian acronym CNIIHM and located in the Nagatino-Sadovniki district of Moscow.

"When we first looked at the Triton incident we had no idea who was responsible for it and that’s actually fairly rare, usually there’s some glaring clue," says John Hultquist, director of research at FireEye. "We had to keep chipping away and let the evidence speak for itself. Now that we’ve associated this capability with Russia we can start thinking about it in the context of Russia’s interests."

King Triton

Triton comprises both malware that infects targets and a framework for manipulating industrial control systems to gain deeper and deeper control of an environment. Triton attacks seem to set the stage for a final phase in which attackers send remote commands that deliver an end payload. The goal is to destabilize or disable an industrial control system's safety monitors and protection mechanisms so attackers can wreak havoc unchecked. Security researchers discovered the 2017 Triton attack after the malware failed to skirt those failsafes, triggering a shutdown.

"They made dumb operational security mistakes."

John Hultquist, FireEye

But while the attackers, dubbed TEMP.Veles by FireEye, left few clues about their origins once inside those target networks, they were sloppier about concealing themselves while testing the Triton intrusion malware. As FireEye researchers analyzed the incident at the Middle Eastern energy plant and worked backward toward the attackers, they eventually stumbled on a testing environment used by TEMP.Veles that linked the group to the intrusion. The attackers had tested and refined malware components since at least 2014 to make them harder for antivirus scanners to detect. FireEye found one of the files from the test environment in the target network.

"They made dumb operational security mistakes, for instance the malware testing," Hultquist says. "They assumed that it wouldn’t be connected to them, because it wasn’t directly tied to the incident—they cleaned up their act for the targeted networks. That’s the lesson we see again and again, these actors make mistakes when they think no one can see them."

Evaluating the testing environment gave FireEye a window into a whole host of TEMP.Veles activities, and they could track how test projects fit in with and mirrored TEMP.Veles's known activity in real victim networks. The group seems to have first been active in the test environment in 2013, and has worked on numerous development projects over the years, particularly customizing open-source hacking tools to tailor them to industrial control settings and make them more inconspicuous.

"Russian government hackers are generally better than leaving a testing environment exposed on the internet."

Jeff Bardin, Treadstone 71

In analyzing the TEMP.Veles malware files, FireEye found one that contained a username connected to a Russia-based information security researcher. The moniker appears to belong to an individual who was a professor at CNIIHM, the institution connected to the malware. FireEye also found that an IP address associated with malicious TEMP.Veles Triton activity, monitoring, and reconnaissance is registered to CNIIHM. The infrastructure and files FireEye analyzed also contain Cyrillic names and notes, and the group seems to work on a schedule consistent with Moscow's time zone. It's worth noting, however, that numerous cities outside Russia—including Tehran—sit in similar time zones.

CNIIHM is a well-resourced Russian government research institution, with expertise in information security and industrial control-focused work. The organization also collaborates extensively with other Russian science, technology, and defense research institutions, all of which makes it a plausible creator of the Triton intrusion malware. FireEye notes that it's possible rogue CNIIHM employees developed the malware there secretly, but the firm sees this as very unlikely. FireEye also linked TEMP.Veles specifically to the Triton intrusion malware, rather than to the entire industrial control framework. But Hultquist says the findings strongly indicate that even if a different organization developed each part of Triton, they're connected in some way.

New Paradigm

The FireEye conclusion represents a fundamental rethinking of the 2017 Triton attack, but questions still remain about what the attribution implies. Russia has little incentive to antagonize Saudi Arabia, says Andrea Kendall-Taylor, a former senior intelligence officer currently at the Center for a New American Security think tank. "Moscow's targeting of Saudi Arabia is inconsistent with my understanding of Russia's geopolitical goals," Kendall-Taylor says. "Moreover, Putin probably would like to maintain a good relationship with Saudi to avoid the appearance of entirely siding with Iran."

And while outside researchers say that FireEye's research looks solid, some argue that the execution seems out of step with what one expects from the Kremlin.

"The attackers were very sloppy, that's my only pause. Russian government hackers are generally better than leaving a testing environment exposed on the internet," says Jeff Bardin, the chief intelligence officer of the threat tracking firm Treadstone 71. "Maybe there is an element of denial and deception in the evidence. But maybe the attackers were proving their models and testing things out with new capabilities."

Regardless of the motive and means, though, it appears that Russian hackers have added yet another ambitious attack to their roster. What's less clear is if and when they might try to use it next.


#####EOF##### Github's Top Coding Languages Show Open Source Has Won | WIRED
Github's Top Coding Languages Show Open Source Has Won

Github

Think of it as a map of the rapidly changing world of computer software.

On Wednesday, Github published a graph tracking the popularity of various programming languages on its eponymous internet service, a tool that lets anyone store, edit, and collaborate on software code. In recent years, Github.com has become the primary means of housing open source software—code that's freely available to the world at large; an increasing number of businesses are using the service for private code, as well. A look at how the languages that predominate on Github have changed over time is a look at how the software game is evolving.

In particular, the graph reveals just how much open source has grown in recent years. It shows that even technologies that grew up in the years before the recent open source boom are thriving in this new world order—that open source has spread well beyond the tools and the companies typically associated with the movement. Providing a quicker, cheaper, and more comprehensive way of building software, open source is now mainstream. And the mainstream is now open source.

"The previous generation of developers grew up in a world where there was a battle between closed source and open source," says Github's Ben Balter, who helped compile the graphic. "Today, that's no longer true."

Java Everywhere

Case in point: the Java programming language. A decade ago, Java was a language primarily used behind closed doors, something that big banks and other "enterprise" companies used to build all sorts of very geeky, very private stuff. But as GitHub's data shows, it's now at the forefront of languages used to build open source software.

Among new projects started on GitHub, Java is now the second-most popular programming language, up from seventh place in 2008; according to Balter, the increase is driven not by private code repositories but by public (open source) repos. Among private Github repos, he says, Java ranks seventh.
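
There's no public dump of the exact dataset behind the graph, but GitHub's public search API offers a rough present-day proxy: it reports how many repositories exist per language. The sketch below (Python, standard library only) uses that API; note that it counts all existing repositories rather than newly created ones, as the graph does, and that unauthenticated search requests are tightly rate-limited.

    import json
    import urllib.request

    API = "https://api.github.com/search/repositories?q=language:{}&per_page=1"

    def repo_count(language):
        # The search response includes a total_count field for the query.
        req = urllib.request.Request(
            API.format(language),
            headers={"Accept": "application/vnd.github+json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["total_count"]

    for lang in ("java", "javascript", "python", "ruby", "swift"):
        print(f"{lang:>10}: {repo_count(lang):,} public repositories")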

Why the shift? Java is well suited to building massive internet services along the lines of Google, Twitter, LinkedIn, Tumblr, and Square, and the economics of the software business dictate that such services run on open source. As Balter points out, Java's rise is also a result of Google making it the primary language for building apps on Android phones and tablets.

The graph also shows a recent uptick for C#. C# is basically Microsoft's version of Java; in years past, it was even more of a closed-source kind of thing. After all, it was overseen by Microsoft, a company that traditionally kept open source at bay. But as the influence of open source has grown, Microsoft has embraced the movement. It has even open sourced many of the tools used to build and run applications in C#.

Another language on the rise among Githubbers? Swift, Apple's language for building apps on the iPhone, iPad, and the Mac (the language doesn't show up in the graph, but in the raw data GitHub sent to WIRED, it now ranks at number 18 on the list). The reasons for this are different. Swift is on the rise because it's brand new and because it's designed for the world's most popular smartphone. But its presence is another nod to the growing importance of open source.

Apple has said it will open source Swift, letting anyone modify it so that it will run on more than just the iPhone and the iPad. When Apple opens up, you'll know the world has changed indeed.

#####EOF##### The Massive Work That Goes Into Remodeling an Old Aircraft Carrier | WIRED
The Massive Work That Goes Into Remodeling an Old Aircraft Carrier

Mass Communication Specialist 2nd Class Rusty Pang/U.S. Navy

Aircraft carriers are complicated. They're floating cities and mobile airbases, housing thousands of sailors and airmen, dozens of aircraft, multiple nuclear reactors, and their own hospitals, barbershops, chapels, and zip codes. Carriers support defense and humanitarian efforts worldwide and can travel upwards of 100,000 nautical miles each year. Each United States aircraft carrier—there are 10 in active service—is designed to last 50 years. But the only way a carrier gets there is with a massive remodeling effort, conducted once in the middle of its lifespan, to update its technology and infrastructure.

Because “remodeling” is a term more often applied to home kitchens and bathrooms, the multi-year, multi-billion dollar process of modernizing the ship and readying it for at least two more decades of service is called Refueling Complex Overhaul (RCOH).

US Navy sailors and shipyard workers work together to update, clean, and restore nearly every square foot of a carrier: They refuel the nuclear reactors, overhaul living spaces, replace catapult systems used to launch aircraft, and repaint the hull, among other things.

NEWPORT NEWS, Va. (July 19, 2014) Cmdr. Timothy Tippett, air boss aboard the aircraft carrier USS Abraham Lincoln (CVN 72), watches as an arresting gear engine is installed on the flight deck. Abraham Lincoln is undergoing a refueling complex overhaul at Newport News Shipbuilding. (U.S. Navy photo by Mass Communication Specialist 3rd Class Brenton Poyser/Released)
Mass Communication Specialist 3rd Class Brenton Poyser/U.S. Navy

Four Nimitz-class aircraft carriers have completed their RCOH since 2001 and USS Abraham Lincoln, commissioned in 1989, is currently undergoing RCOH. During its active service, Lincoln was primarily stationed in the Persian Gulf, including a stint to support Operation Desert Storm in the early 1990s.

In 2013, the ship was placed in drydock in Newport News, Virginia, the same shipyard that laid down its keel in 1984. “We have dozens of shipbuilders that worked on Lincoln during new construction 25 years ago who are working on the RCOH. These shipbuilders have a level of expertise and a bond with the ship that you cannot find anywhere else in the world,” says Bruce Easterson, construction director of Newport News Shipbuilding.

With the help of 2,500 sailors and 3,000 shipyard workers, Lincoln is being methodically overhauled. “During USS Abraham Lincoln’s 44-month RCOH, virtually every space will be touched as the crew, contractors, and the shipyard together pour more than 25 million man hours into our ship,” remarks Captain Ron Ravelo, commanding officer of Lincoln. “The overhaul of USS Abraham Lincoln is nearly 50 percent complete and remains on track for delivery in October 2016 with a fully trained and tested crew,” according to Ravelo.

“We painted the entire hull, replaced the ship's shafting and propellers, and refueled the ship's reactors while in drydock,” explains Easterson. Lincoln’s water-tight doors got a full cleaning, rust removal, and new powder coat paint job before being returned to the ship. In October 2014, workers unexpectedly found that one of Lincoln’s two 30-ton anchors needed to be replaced. A perfect donor anchor was found just a few hundred yards away on the USS Enterprise, the Navy’s first nuclear carrier, which was deactivated in 2012 and destined for the scrap yard.

“After Lincoln successfully undocked from Newport News Shipbuilding in November 2014, the ship symbolically transitioned from the "rip-out" phase to the "rebuilding" phase,” explains Captain Ravelo. The ship’s living quarters are currently being updated and retrofitted, and the crew will move aboard by May 2016. There’s at least one unexpected bonus for the sailors doing the dirty work, says airman Ehren Bass. “An upside to this job is that it allows you to explore the ship.”

NEWPORT NEWS, Va. (Dec. 10, 2013) Sailors assigned to the Nimitz-class aircraft carrier USS Abraham Lincoln's (CVN 72) decking team remove the old deck in a compartment with a pneumatic jackhammer. Lincoln is currently undergoing a Refueling and Complex Overhaul (RCOH) at Newport News Shipbuilding, a division of Huntington Ingalls Industries. (U.S. Navy photo by Mass Communication Specialist 3rd Class Danian Douglas/Released)
Mass Communication Specialist 3rd Class Danian Douglas/U.S. Navy
#####EOF##### How to delete your Google search history and stop tracking | WIRED UK

How to delete your Google search history and stop tracking

Take back control of all the personal data Google stores about you with our easy-to-follow security tips



Google tracks you on and off the web in a myriad of ways – that's no surprise. But you can wrestle back some level of control. Want to stop Google from knowing anything about you? That’s nigh-on impossible: the advertising giant collects data every time you search the web, every time you visit a website, every time you use your Android phone – you name it, Google is using it to collect data about you. It's the cost of getting so many services without spending any money, but there are ways to limit what Google collects about you.

What Google knows about what you do

There are two ways to get a copy of all the data Google collects on you: Takeout and Dashboard. Takeout was created to let users grab their data from Google and shift it to another service, beginning with photos and contacts but since expanding to Android device settings, Chrome bookmarks, Google Fit activity data, and even your Cloud Print history. A Takeout archive can take a few days to build, with Google sending you a link to download it when it's ready.

Dashboard was designed with data management in mind, offering a snapshot of the data Google collects about you as you use its services. That includes the number of email exchanges you've had in Gmail, number of files in Drive, and how many photos Google stores for you, but the key information is what Google dubs "activity data", such as your location or searches or browsing history. If you want to freak someone out, show them their location timeline, where Google Maps keeps track of everywhere you go and when, alongside the photos taken that day and travel times down to the minute.

Another source of Google data is your personal profile, held in your Google Account. Under the menu, head to "Personal Info": on this page, you can see what information Google makes public about you and update information such as your photo and birthdate. You can't simply delete this data, but if you want to obfuscate you can of course enter false information – just remember you've done so in case you need that information for password resets.

What Google thinks it knows about you

Google uses the data it collects to build an advertising profile, making its money via ads – Google's parent firm Alphabet posted ad revenue of $32.6 billion last quarter – not by directly selling your data, but through letting companies personalise their advertisements; this is why that pair of trainers you’ve been coveting keeps following you around the web. Such behavioural advertising can be more sophisticated than that. Google notes that if you search on Maps for "football fields near me" or watch match highlights on YouTube, it can put two and two together that you're a football fan.

Handily, you can see who Google thinks you are by heading to Ad Settings. There, Google paints a picture of who it thinks you are: your age and gender, what topics you're interested in from air travel to world news, and companies you've visited online.

Eyeing up Takeout, Dashboard, and your personal and advertising profile, you'll get a good sense of the epic mountain of data Google is accumulating about who you are, where you go, and what interests you. If you think Google is leaving something out, or has more data on you it's not willingly revealing, you can also file a subject-access request, which is a right enshrined in EU law to find out what data any organisation holds on you.

How to clean up your Google account settings

Now that you've got a sense of the epic scale of data being collected, it's time to do something about it. Google's default settings favour data collection rather than personal privacy, but the company has made it easier to consider your settings on the Data & Personalisation page with the Privacy Checkup, which walks you through a set of questions regarding your settings.

That includes "activity" settings, profile information and personalised advertising; if you've got a Google account, get a cup of tea (or something stronger) and spend half an hour going through each and every control.

Web & App Activity

Web & App Activity collects your searches and browsing activity in Google apps such as Chrome, as well as in apps that use Google services, such as mapping. Google uses that data to resurface previous searches and make suggestions; if you turn it off, you won't see your recent searches or personalised results. Turning this off doesn't block Google from knowing which sites you visit.

Voice

If you talk to your phone or a Google Home device, such as by clicking the microphone icon in Chrome or saying "Okay Google", a record is kept. Google says it uses that data to improve its speech recognition, including to better understand your specific voice. Each clip is accompanied by details of when the recording was made and through which app, such as Chrome or the Android Google app — you can even play back the sound clips. They can be deleted en masse or one by one. These recordings can be disabled in your account under My Activity.

Location

Location history tracks where you are even when you aren't using Google Maps. Google asks whatever device you're using where you are, and holds onto that data. Head to your Timeline to see the full scale of it: if you use Android, Google likely knows where you were at all points in time for years. The personalised services this offers aren't impressive: turn off location history, and you can still use Maps, but you won't get recommendations based on places you visited or "useful" ads. Turn it off in Activity controls, and delete existing data in Timeline. Even if you turn off location history tracking, Google still knows where you are, and other apps may nab that information; to fully stop that, you'll need to turn off Web & App Activity as well.

Other settings

Alongside the above, you can also disable YouTube Watch History and Search History, which Google uses for recommendations, and manage Google Photos, such as turning off facial recognition and removing location data from the metadata of shared photos.

Advertising profiles

If you don't want personalised ads, you can turn them off in Ad Settings, and, under Options, stop Google from using your web activity and other information from Google services to personalise ads. You'll still see ads – this isn't a blocker – but Google will discard any personalisation you've requested, such as asking not to be shown specific ads or topic areas. Google will also still collect information such as the subject of the page you're looking at, the time of day, and your location; it just won't pair that with your previous browsing history or what you watched on YouTube.

Demographic information, such as your age and gender, can't be deleted but can be updated; if you're trying to avoid Google's reach, there's nothing to say you can't lie here, though Google may well suss out your deception and switch you back to a 35-44 year-old woman, even if you try to tell the company you were actually born a man in 1927.

Topics of interest can be changed or deleted by clicking "turn off"; this information is based on your Activity Controls described above, so if you want Google to stop collecting and using your browsing information to uncover your interests, turn off Web & App Activity and turn off Ad Personalisation. If you want to turn those ad signals back on, scroll back down to "what you've turned off" to re-enable them.

You can also turn off specific advertisers in Ad Settings. Click the name of the company, and Google will reveal why it thinks you're interested – perhaps you visited the advertiser's website or app – and let you click to "turn off" those ads. That doesn't mean you won't ever see ads from that company, but they won't be based on personalised data.

Other ways to staunch the data leak

The best way to limit Google's data collection is to delete your account, but you needn't go that far to staunch the flow. Can't live without Gmail or Maps? You can limit some of the collection by switching to some non-Google products and services where it suits you. For example, ditch Chrome for Firefox or Brave. Use DuckDuckGo rather than Google Search. If you can afford it, ditch your Android for an iPhone. And so on.

You can delete your account entirely – but even then, Google may still keep tracking you via what one report called "passive data", though Google said it doesn't tie your name or other identifiable details to that profile.

Because of that, a more proactive approach may be necessary even for those without Google accounts. As with any online activity, ad blockers such as AdBlock Plus and privacy extensions like Disconnect or Ghostery will stop surveillance systems such as cookies and social trackers. On Android, the Firefox Focus browser has such tools built in; on desktop, consider the Brave browser.


#####EOF##### Terrorists Don't Do Movie Plots | WIRED
Terrorists Don't Do Movie Plots

Sometimes it seems like the people in charge of homeland security spend too much time watching action movies. They defend against specific movie plots instead of against the broad threats of terrorism.

We all do it. Our imaginations run wild with detailed and specific threats. We imagine anthrax spread from crop dusters. Or a contaminated milk supply. Or terrorist scuba divers armed with almanacs. Before long, we're envisioning an entire movie plot, without Bruce Willis saving the day. And we're scared.

Psychologically, this all makes sense. Humans have good imaginations. Box cutters and shoe bombs conjure vivid mental images. "We must protect the Super Bowl" packs more emotional punch than the vague "we should defend ourselves against terrorism."

The 9/11 terrorists used small pointy things to take over airplanes, so we ban small pointy things from airplanes. Richard Reid tried to hide a bomb in his shoes, so now we all have to take off our shoes. Recently, the Department of Homeland Security said that it might relax airplane security rules. It's not that there's a lessened risk of shoes, or that small pointy things are suddenly less dangerous. It's that those movie plots no longer capture the imagination like they did in the months after 9/11, and everyone is beginning to see how silly (or pointless) they always were.

Commuter terrorism is the new movie plot. The London bombers carried bombs into the subway, so now we search people entering the subways. They used cell phones, so we're talking about ways to shut down the cell-phone network.

It's too early to tell if hurricanes are the next movie-plot threat that captures the imagination.

The problem with movie plot security is that it only works if we guess right. If we spend billions defending our subways, and the terrorists bomb a bus, we've wasted our money. To be sure, defending the subways makes commuting safer. But focusing on subways also has the effect of shifting attacks toward less-defended targets, and the result is that we're no safer overall.

Terrorists don't care if they blow up subways, buses, stadiums, theaters, restaurants, nightclubs, schools, churches, crowded markets or busy intersections. Reasonable arguments can be made that some targets are more attractive than others: airplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because most people commute daily. But the United States is a big country; we can't defend everything.

One problem is that our nation's leaders are giving us what we want. Party affiliation notwithstanding, appearing tough on terrorism is important. Voting for missile defense makes for better campaigning than increasing intelligence funding. Elected officials want to do something visible, even if it turns out to be ineffective.

The other problem is that many security decisions are made at too low a level. The decision to turn off cell phones in some tunnels was made by those in charge of the tunnels. Even if terrorists then bomb a different tunnel elsewhere in the country, that person did his job.

And anyone in charge of security knows that he'll be judged in hindsight. If the next terrorist attack targets a chemical plant, we'll demand to know why more wasn't done to protect chemical plants. If it targets schoolchildren, we'll demand to know why that threat was ignored. We won't accept "we didn't know the target" as an answer. Defending particular targets protects reputations and careers.

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn't make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.

These vague things are less visible, and don't make for good political grandstanding. But they will make us safer. Throwing money at this year's movie plot threat won't.

- - -

Bruce Schneier is the CTO of Counterpane Internet Security and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. You can contact him through his website.


#####EOF##### Security Researcher, Cybercrime Foe Goes Missing | WIRED
Security Researcher, Cybercrime Foe Goes Missing

A well-known security researcher and cybercrime foe appears to have gone missing in Bulgaria and is feared harmed, according to a news organization that hosts a blog the researcher co-writes.

Bulgarian researcher Dancho Danchev, who writes for ZDNet's Zero Day blog, is an independent security consultant who's garnered the enmity of cybercriminals for his work tracking and exposing their malicious activity. He has often provided insightful analysis of East European criminal activity and online scams.

His last blog entry was a compilation of his research into the cyberjihad activity of terrorist groups. He was also particularly focused on monitoring the group believed to be behind the Koobface worm, which targets users of Facebook and other social networking sites.

Danchev has reportedly been missing since at least September, when he sent a mysterious letter to a friend in the malware-research community revealing concerns that his apartment was being bugged by Bulgarian law enforcement and intelligence services.

The letter, sent to the friend as "insurance in case things get ugly," included photos that Danchev purportedly took of a device that he believed was planted in his bathroom by government agents to monitor him. The device appears to be a transformer.

The letter said:

I’m attaching you photos of the “current situation in my bathroom”, courtesy of Bulgarian Law enforcement+intell services who’ve been building a case trying to damage my reputation, for 1.5 years due to my clear pro-Western views+the fact that a few months ago, the FBI Attache in Sofia, Bulgaria recommended me as an expert to Bulgarian CERT -> clearly you can see how they say “You’re Welcome”.

ZDNet, which has been trying unsuccessfully to contact Danchev since August, published the letter and photos Friday in the hope that someone with information about Danchev's whereabouts would come forward.

ZDNet blogger Ryan Naraine, who blogs at Zero Day with Danchev, reported that Danchev had contributed his last blog entry Aug. 18 and that his personal blog was last updated Sept. 11. The letter Danchev apparently sent to his friend about the surveillance on him was received Sept. 9.

Subsequent attempts to contact Danchev by phone, e-mail and postal mail have been unsuccessful, ZDNet reports. A knock on the door at his residence in Bulgaria also went unanswered.

"Last month, we finally got a mysterious message from a local source in Bulgaria that 'Dancho’s alive but he’s in a lot of trouble,'" Naraine wrote. "We were told that he’s in the kind of trouble to keep him away from a computer and telephone, so it would be impossible to make contact with him."

Naraine told Threat Level that Danchev was an active participant on a mailing list where ZDNet's bloggers discuss their stories and would generally contact editors and fellow bloggers once a week to let them know what he was working on. That communication stopped in August. Naraine said that he also hasn't seen Danchev logged into his Skype, Google Talk or instant messaging account for months.

"I’ve been hearing from a lot of people on private lists saying that Dancho is alive," Naraine said. "But no one can say where he is or why he has disappeared off the grid. He was not the kind of guy to just disappear."

#####EOF##### There's No Good Reason to Trust Blockchain Technology | WIRED
There's No Good Reason to Trust Blockchain Technology

La Tigre

In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin—and everything about it.

WIRED OPINION

ABOUT

Bruce Schneier is a security technologist who teaches at the Harvard Kennedy School. He is the author, most recently, of Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World.

Much has been written about blockchains and how they displace, reshape, or eliminate trust. But when you analyze both blockchain and trust, you quickly realize that there is much more hype than value. Blockchain solutions are often much worse than what they replace.

First, a caveat. By blockchain, I mean something very specific: the data structures and protocols that make up a public blockchain. These have three essential elements. The first is a distributed (as in multiple copies) but centralized (as in there’s only one) ledger, which is a way of recording what happened and in what order. This ledger is public, meaning that anyone can read it, and immutable, meaning that no one can change what happened in the past.

The second element is the consensus algorithm, which is a way to ensure all the copies of the ledger are the same. This is generally called mining; a critical part of the system is that anyone can participate. It is also distributed, meaning that you don’t have to trust any particular node in the consensus network. It can also be extremely expensive, both in data storage and in the energy required to maintain it. Bitcoin has the most expensive consensus algorithm the world has ever seen, by far.

Finally, the third element is the currency. This is some sort of digital token that has value and is publicly traded. Currency is a necessary element of a blockchain to align the incentives of everyone involved. Transactions involving these tokens are stored on the ledger.
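
To make the first two elements concrete, here is a toy hash-chained ledger with a deliberately expensive proof-of-work step, written in Python. This is a teaching sketch under simplified assumptions, nothing like production bitcoin code: each block commits to its predecessor's hash, so rewriting history breaks every later link, and "mining" is a brute-force search for a hash with a required prefix.

    import hashlib
    import json

    DIFFICULTY = 4   # leading zero hex digits; each extra digit is ~16x the work

    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def mine(prev_hash, transactions):
        block = {"prev": prev_hash, "txs": transactions, "nonce": 0}
        while not block_hash(block).startswith("0" * DIFFICULTY):
            block["nonce"] += 1          # the brute-force search is the expense
        return block

    chain = [mine("0" * 64, ["genesis"])]
    chain.append(mine(block_hash(chain[-1]), ["alice pays bob 1"]))
    chain.append(mine(block_hash(chain[-1]), ["bob pays carol 1"]))

    # Immutability: tamper with an old block and the chain no longer links up.
    chain[0]["txs"] = ["alice pays mallory 1000"]
    assert chain[1]["prev"] != block_hash(chain[0])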

Private blockchains are completely uninteresting. (By this, I mean systems that use the blockchain data structure but don’t have the above three elements.) In general, they have some external limitation on who can interact with the blockchain and its features. These are not anything new; they’re distributed append-only data structures with a list of individuals authorized to add to it. Consensus protocols have been studied in distributed systems for more than 60 years. Append-only data structures have been similarly well covered. They’re blockchains in name only, and—as far as I can tell—the only reason to operate one is to ride on the blockchain hype.

All three elements of a public blockchain fit together as a single network that offers new security properties. The question is: Is it actually good for anything? It's all a matter of trust.

Trust is essential to society. As a species, humans are wired to trust one another. Society can’t function without trust, and the fact that we mostly don’t even think about it is a measure of how well trust works.

The word “trust” is loaded with many meanings. There’s personal and intimate trust. When we say we trust a friend, we mean that we trust their intentions and know that those intentions will inform their actions. There’s also the less intimate, less personal trust—we might not know someone personally, or know their motivations, but we can trust their future actions. Blockchain enables this sort of trust: We don’t know any bitcoin miners, for example, but we trust that they will follow the mining protocol and make the whole system work.

Most blockchain enthusiasts have an unnaturally narrow definition of trust. They’re fond of catchphrases like “in code we trust,” “in math we trust,” and “in crypto we trust.” This is trust as verification. But verification isn’t the same as trust.

In 2012, I wrote a book about trust and security, Liars and Outliers. In it, I listed four very general systems our species uses to incentivize trustworthy behavior. The first two are morals and reputation. The problem is that they scale only to a certain population size. Primitive systems were good enough for small communities, but larger communities required delegation, and more formalism.

The third is institutions. Institutions have rules and laws that induce people to behave according to the group norm, imposing sanctions on those who do not. In a sense, laws formalize reputation. Finally, the fourth is security systems. These are the wide varieties of security technologies we employ: door locks and tall fences, alarm systems and guards, forensics and audit systems, and so on.

These four elements work together to enable trust. Take banking, for example. Financial institutions, merchants, and individuals are all concerned with their reputations, which prevents theft and fraud. The laws and regulations surrounding every aspect of banking keep everyone in line, including backstops that limit risks in the case of fraud. And there are lots of security systems in place, from anti-counterfeiting technologies to internet-security technologies.

In his 2018 book, Blockchain and the New Architecture of Trust, Kevin Werbach outlines four different “trust architectures.” The first is peer-to-peer trust. This basically corresponds to my morals and reputational systems: pairs of people who come to trust each other. His second is leviathan trust, which corresponds to institutional trust. You can see this working in our system of contracts, which allows parties that don’t trust each other to enter into an agreement because they both trust that a government system will help resolve disputes. His third is intermediary trust. A good example is the credit card system, which allows untrusting buyers and sellers to engage in commerce. His fourth trust architecture is distributed trust. This is emergent trust in the particular security system that is blockchain.

What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.

When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?

Blockchain enthusiasts point to more traditional forms of trust—bank processing fees, for example—as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that's the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.

Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility—that’s when the people in charge of a blockchain step outside the system to change it—people will need to be in charge.

Any blockchain system will have to coexist with other, more conventional systems. Modern banking, for example, is designed to be reversible. Bitcoin is not. That makes it hard to make the two compatible, and the result is often an insecurity. Steve Wozniak was scammed out of $70K in bitcoin because he forgot this.

Blockchain technology is often centralized. Bitcoin might theoretically be based on distributed trust, but in practice, that’s just not true. Just about everyone using bitcoin has to trust one of the few available wallets and use one of the few available exchanges. People have to trust the software and the operating systems and the computers everything is running on. And we've seen attacks against wallets and exchanges. We’ve seen Trojans and phishing and password guessing. Criminals have even used flaws in the system that people use to repair their cell phones to steal bitcoin.

Moreover, in any distributed trust system, there are backdoor methods for centralization to creep back in. With bitcoin, there are only a few miners of consequence. There’s one company that provides most of the mining hardware. There are only a few dominant exchanges. To the extent that most people interact with bitcoin, it is through these centralized systems. This also allows for attacks against blockchain-based systems.

These issues are not bugs in current blockchain applications; they’re inherent in how blockchain works. Any evaluation of the security of the system has to take the whole socio-technical system into account. Too many blockchain enthusiasts focus on the technology and ignore the rest.

To the extent that people don’t use bitcoin, it’s because they don’t trust bitcoin. That has nothing to do with the cryptography or the protocols. In fact, a system where you can lose your life savings if you forget your key or download a piece of malware is not particularly trustworthy. No amount of explaining how SHA-256 works to prevent double-spending will fix that.

Similarly, to the extent that people do use blockchains, it is because they trust them. People either own bitcoin or not based on reputation; that’s true even for speculators who own bitcoin simply because they think it will make them rich quickly. People choose a wallet for their cryptocurrency, and an exchange for their transactions, based on reputation. We even evaluate and trust the cryptography that underpins blockchains based on the algorithms’ reputation.

To see how this can fail, look at the various supply-chain security systems that are using blockchain. A blockchain isn’t a necessary feature of any of them. The reason they’re successful is that everyone has a single software platform to enter their data into. Even though the blockchain systems are built on distributed trust, people don’t necessarily accept that. For example, some companies don’t trust the IBM/Maersk system because it’s not their blockchain.

Irrational? Maybe, but that’s how trust works. It can’t be replaced by algorithms and protocols. It’s much more social than that.

Still, the idea that blockchains can somehow eliminate the need for trust persists. Recently, I received an email from a company that implemented secure messaging using blockchain. It said, in part: “Using the blockchain, as we have done, has eliminated the need for Trust.” This sentiment suggests the writer misunderstands both what blockchain does and how trust works.

Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain—of course, then they wouldn’t have the cool name.

Honestly, cryptocurrencies are useless. They're only used by speculators looking for quick riches, people who don't like government-backed currencies, and criminals who want a black-market way to exchange money.

To answer the question of whether the blockchain is needed, ask yourself: Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?

If you ask yourself those questions, it's likely you'll choose solutions that don't use public blockchain. And that'll be a good thing—especially when the hype dissipates.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com



#####EOF##### The Navy's New Robot Looks and Swims Just Like a Shark | WIRED
The Navy's New Robot Looks and Swims Just Like a Shark

The GhostSwimmer vehicle undergoes testing.
Edward Guttierrez/US Navy

The American military does a lot of work in the field of biomimicry, stealing designs from nature for use in new technology. After all, if you're going to design a robot, where better to draw inspiration than from billions of years of evolution? The latest result of these efforts is the GhostSwimmer: the Navy’s underwater drone designed to look and swim like a real fish, and liable to spook the bejeezus out of any beachgoer who’s familiar with Jaws.

The new gizmo, at five feet long and nearly 100 pounds, is about the size of an albacore tuna but looks more like a shark, at least from a distance. It’s part of an experiment to explore the possibilities of biomimetic unmanned underwater vehicles, and the Navy announced it wrapped up testing of the design last week.

The robot uses its tail for propulsion and control, like a real fish. It can operate in water as shallow as 10 inches or dive down to 300 feet. It can be controlled remotely via a 500-foot tether, or swim independently, periodically returning to the surface to communicate. Complete with dorsal and pectoral fins, the robofish is stealthy too: It looks like a fish and moves like a fish, and, like other underwater vehicles, is difficult to spot even if you know to look for it.


Down the line, it could be used for intelligence, surveillance, and reconnaissance missions, when it’s not assigned to more mundane tasks like inspecting the hulls of friendly ships. Animal lovers will be glad to hear that the GhostSwimmer could take the jobs of the bottlenose dolphins and California sea lions the Navy currently trains to spot underwater mines and recover equipment.

The GhostSwimmer joins the ranks of animal-based awesome/creepy robots like the "Cheetah" that can run at nearly 30 mph, the Stickybot that climbs like a gecko, and the cockroach-inspired iSprawl that can cover 7.5 feet per second. And it may get a baby brother: The Department of Homeland Security has been funding development of a similar, smaller robot called the BIOSwimmer.

True to military form, there’s a whole suite of acronyms to go along with the new toy: The UUV (unmanned underwater vehicle) has been in testing at the JEBLC-FS (Joint Expeditionary Base Little Creek-Fort Story) under the CRIC (Chief of Naval Operations Rapid Innovation Cell) project known as Silent NEMO (this one, for once, doesn’t seem to stand for anything). The robot itself was developed by the Advanced Systems Group at Boston Engineering, a Navy contractor that specializes in robotics, unmanned systems, and something called "special tactical equipment." The company and Navy haven't said much about when GhostSwimmer might be deployed or how much it would cost, but next time you're at the beach and see a fin sticking out of the water, it might be a killer shark—or it might just be a Navy robot.

#####EOF##### Magazine | WIRED


#####EOF##### Winter Olympic Cyberattacks Have Already Started—and May Not Be Over | WIRED
Hackers Have Already Targeted the Winter Olympics—and May Not Be Done

The Pyeongchang Olympics are already under cyberattack on at least two fronts, with no clear endgame in sight.
Chung Sung-Jun/Getty Images

The Olympics have always been a geopolitical microcosm: beyond the athletic match-ups, they provide a vehicle for diplomacy and propaganda, and even, occasionally, a proxy for war. It stands to reason, then, that in 2018 they've also become a nexus of hacker skullduggery. The Olympics unfolding next week in Pyeongchang may already be the most thoroughly hacked in the games' history—with potentially more surprises to come.

More so than any previous Olympics, the run-up to Pyeongchang has been plagued by apparent state-sponsored hackers: One Russia-linked campaign has stolen and leaked embarrassing documents from Olympic organizations, while security researchers have tracked another operation, possibly North Korean, that appears to be spying on South Korean Olympics-related organizations.

Security researchers tracking those two operations say the full scope of either remains far from clear, leaving the looming question of whether they could still present new disruptions timed to unfold during the games themselves. And more broadly, the intrusions signal that the geopolitical tensions that have long underscored the Olympics now extend into the digital realm as well.

"The Olympics have always been the most politicized sporting event of them all," says Thomas Rid, a professor of strategic studies at Johns Hopkins University's School of Advanced International Studies. "It’s not a surprise at all that they've become a high-profile target for hacking."

Operation GoldDragon

The far stealthier of the two known Olympics hacking operations—and perhaps the most troubling—has quietly targeted South Korean Olympics-related organizations for well over a month. Researchers for security firm McAfee discovered just this week that the campaign, which they've named Operation GoldDragon, has attempted to plant three distinct spyware tools on target machines that would enable hackers to deeply scour the compromised computers' contents. McAfee identifies those malicious tools by the names GoldDragon, BravePrince, and GHOST419.


The firm's researchers say they've linked those malware samples to a phishing campaign that lures victims with Korean-language emails, indicating South Korean targets. The messages, which spoof a note from South Korea's National Counter-Terrorism Center—and, according to McAfee, were timed to actual terrorism drills in Pyeongchang—went out to a BCC'd list of more than 300 Olympics-related targets, with only the address "icehockey@pyeongchang2018.com" visible in the "to" line. Analyzing the email's metadata, however, McAfee identified other intended victims, including local tourism organizations in Pyeongchang, ski resorts, transportation, and key departments of the Pyeongchang Olympics effort.

The hackers attached a Korean-language Word document to the email, crafted to run a malicious script on the target machine. If the victim clicked "enable content" after opening that tainted attachment, they would give the attacker remote access to the computer. The attackers could use that initial, temporary foothold to install their spyware for more persistent visibility into any hacked machine. McAfee notes that the script is hidden in an innocent-looking image file with clever steganography and other obfuscation tactics.
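McAfee didn't publish its decoder, but the general class of trick is easy to illustrate. Below is a minimal Python sketch of least-significant-bit steganography extraction; the 32-bit length prefix is our own assumption, and real campaigns like this one use custom, more heavily obfuscated schemes.

```python
# Hypothetical sketch of least-significant-bit (LSB) steganography
# extraction. Assumes the payload's byte length sits in the first 32
# hidden bits; that layout is illustrative, not the GoldDragon format.
from PIL import Image  # pip install pillow

def extract_lsb_payload(image_path: str) -> bytes:
    pixels = Image.open(image_path).convert("RGB").getdata()
    bits = []
    for r, g, b in pixels:
        bits += [r & 1, g & 1, b & 1]  # one hidden bit per color channel
    length = int("".join(map(str, bits[:32])), 2)  # assumed length prefix
    payload = bits[32:32 + length * 8]
    return bytes(
        int("".join(map(str, payload[i:i + 8])), 2)
        for i in range(0, len(payload), 8)
    )
```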

McAfee traced the phishing scheme to a remote server in the Czech Republic, registered with fake credentials to a South Korean government ministry. And they found publicly accessible logs on that remote server that showed victim machines were in fact connecting to it from South Korea, a sign of actual infections. "Was this a successful campaign? The answer is yes," says McAfee chief scientist Raj Samani. "We know that it's had victims."

Despite all of those findings, the origin and the ultimate aim of that relatively sophisticated malware campaign remain unclear. But based on the Korean language and targeting, Samani hints that his working theory points to a North Korean espionage operation keeping tabs on its southern neighbor.

That spying may seem to run counter to a recent thawing of diplomatic relations between the two Koreas, one that has even resulted in a combination of the two countries' national women's hockey teams. But North Korea likely wouldn't call off its aggressive hacking over a momentary olive branch. "I would guess it's a 'keep your friends close and your enemies closer' approach," Samani says.

Anti-Doping Bears

A far louder and more explicit hacker threat has come from a notorious outfit linked with the Kremlin's GRU military intelligence agency, known as Fancy Bear, or APT28—according to many security researchers, almost certainly the same Fancy Bear that hacked the Democratic National Committee and Clinton campaign in the midst of the 2016 election.


Since as early as September of that year, those brazen hackers have repeatedly targeted athletic organizations, with the intent of exposing evidence of what they claim is widespread doping in Western countries, an apparent retaliation for the ban of Russian athletes from the 2016 and 2018 games on the same charge. "We will start with the US team which has disgraced its name by tainted victories," the hackers wrote in a message on their website when they first began leaking documents from the World Anti-Doping Agency in September of 2016. "Wait for sensational proof of famous athletes taking doping substances any time soon."

At the time, the Fancy Bear hackers released the private medical records of star US athletes Serena Williams, Venus Williams, and Simone Biles, revealing the permissions those athletes had received to use potentially performance-enhancing drugs to treat attention deficit disorder and muscle inflammation.

This year, Fancy Bear has planned its Olympics hacking far more proactively. Starting in early January, the group published two collections of hacked documents from Olympics-related agencies: One set revealed political tensions between officials at the International Olympic Committee and the WADA officials tasked with policing the games' athletes. A second release later in the month again pointed to special permissions given to certain athletes—a member of the Swedish luge team takes asthma medication, for instance—and to an Italian athlete who had at one point missed a drug test. And a third leak on Wednesday pointed to the case of Shawn Barber, a Canadian pole vaulter allowed to compete in the 2016 games despite at one point testing positive for cocaine.

None of Fancy Bear's recent releases has proven any clear wrongdoing—at least, nothing remotely comparable to Russia's systematic doping program for thousands of athletes—and all have generally been ignored by the sporting world and the Western media. But Russian state news outlets have nonetheless faithfully rehashed the leaks. And Johns Hopkins' Rid says the hacks, like the attacks on the DNC and Clinton campaign in 2016, have an effect that's not easily measured or dismissed.

Rid compares the operation to the KGB's tactics in 1984, after the Soviet Union pulled out of the Summer Olympics in Los Angeles. The spy agency responded by mailing forged KKK pamphlets threatening race-based attacks to members of 20 visiting Asian and African teams. "There’s no great goal they want to achieve," Rid says. "It’s more one of throwing wrenches and sand into the gears of a machine, to make life more difficult for your adversary, engender debate and internal conflict among allies to distract from the confrontation that’s harming you."

More Ammunition

Fancy Bear may yet have more leaks in store. Security firms Trend Micro and ThreatConnect have linked the group's propaganda campaign with collections of spoofed domains they've discovered, likely used in the group's well-honed phishing attacks. Many of those fake domains haven't yet resulted in leaks, but may have nonetheless led to compromises of Olympics-related organizations. The researchers have spotted registrations for spoofed domains designed to mimic the US Anti-Doping Agency, its British counterpart UK Anti-Doping, the Olympic Council of Asia, the European Ice Hockey Federation, the International Ski Federation, the International Biathlon Union, and the International Bobsleigh and Skeleton Federation.
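Flagging such lookalike registrations is partly mechanical. Here is a hedged Python sketch of the general approach; the domain list and similarity threshold are illustrative placeholders, not Trend Micro's or ThreatConnect's actual tooling.

```python
# Illustrative sketch: flag newly registered domains that closely resemble
# legitimate sports-governance domains. The legit list and threshold are
# placeholders, not the security firms' real detection logic.
from difflib import SequenceMatcher

LEGITIMATE = ["usada.org", "ukad.org.uk", "ocasia.org", "fis-ski.com"]

def looks_spoofed(domain: str, threshold: float = 0.75) -> bool:
    return any(
        SequenceMatcher(None, domain.lower(), legit).ratio() >= threshold
        for legit in LEGITIMATE
    )

print(looks_spoofed("usada-org.org"))  # True: a plausible phishing lookalike
```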


Security firms, to be clear, have no evidence that those organizations have been compromised. But they point out that the same group that's registered fake domains that seem to have been used in earlier Fancy Bear phishing and leaking operations registered fake domains for those targets, too. Any one of them might be a source of new, disruptive secret-spilling before or during the games. "In the run-up to the Olympics, we’d expect to see continuing activity from Fancy Bear and other APTs," says ThreatConnect researcher Kyle Ehmke, using the abbreviation for "Advanced Persistent Threat," an industry term for sophisticated state-sponsored hackers. "There’s no reason to think they’ll conclude operations just because of what’s already been released."

In the parallel case of the likely North Korean espionage campaign, McAfee chief scientist Samani notes that the hacking operation could also get worse before it gets better. If the hackers behind that campaign change their motivation, nothing prevents them from using machines they've compromised on target networks to launch attacks that go beyond espionage, such as destroying data or disrupting networks.

"We do know that other campaigns have gone down the intelligence path and then used it as a vehicle to cause destruction," Samani says, noting that there's no indication of the hackers' motivation beyond mere spying one way or another in this case. "We have no idea what may follow."

All of those indicators of digital meddling, from leaks to espionage campaigns, don't quite add up to a cyberdoomsday scenario. But for the Olympics' organizers—or the athletes waiting for their once-in-a-lifetime spotlight—the notion of multiple, determined hacker teams targeting the world's biggest sporting event should provide enough anxieties to last until the closing ceremony.


#####EOF##### A Google Site Meant to Protect You Is Helping Hackers Attack You | WIRED
A Google Site Meant to Protect You Is Helping Hackers Attack You

Before companies like Microsoft and Apple release new software, the code is reviewed and tested to ensure it works as planned and to find any bugs.

Hackers and cybercrooks do the same. The last thing you want if you're a cyberthug is for your banking Trojan to crash a victim's system and be exposed. More importantly, you don't want your victim's antivirus engine to detect the malicious tool.

So how do you maintain your stealth? You submit your code to Google's VirusTotal site and let it do the testing for you.

It's long been suspected that hackers and nation-state spies are using Google's antivirus site to test their tools before unleashing them on victims. Now Brandon Dixon, an independent security researcher, has caught them in the act, tracking several high-profile hacking groups—including, surprisingly, two well-known nation-state teams—as they used VirusTotal to hone their code and develop their tradecraft.

"There's certainly irony" in their use of the site, Dixon says. "I wouldn't have expected a nation state to use a public system to do their testing."

VirusTotal is a free online service—launched in 2004 by Hispasec Sistemas in Spain and acquired by Google in 2012—that aggregates more than three dozen antivirus scanners made by Symantec, Kaspersky Lab, F-Secure, and others. Researchers, and anyone else who finds a suspicious file on their system, can upload the file to the site to see if any of the scanners tag it as malicious. But the site, meant to protect us from hackers, also inadvertently provides hackers the opportunity to tweak and test their code until it bypasses the site's suite of antivirus tools.
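For a legitimate researcher, checking a file's standing has long been a single API call. A minimal sketch against VirusTotal's legacy v2 REST interface follows; the API key and hash are placeholders, and current documentation should be consulted before relying on the endpoint.

```python
# Minimal sketch of a file-report lookup against VirusTotal's legacy v2
# REST API. The API key and file hash below are placeholders.
import requests

resp = requests.get(
    "https://www.virustotal.com/vtapi/v2/file/report",
    params={"apikey": "YOUR_API_KEY", "resource": "<sha256-of-suspect-file>"},
)
report = resp.json()
if report.get("response_code") == 1:  # 1 means VirusTotal knows the file
    print(report["positives"], "of", report["total"], "engines flagged it")
```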

Dixon has been tracking submissions to the site for years and, using data associated with each uploaded file, has identified several distinct hackers or hacker teams as they've used VirusTotal to refine their code. He's even been able to identify some of their intended targets.

He can do this because every uploaded file leaves a trail of metadata available to subscribers of VirusTotal's professional-grade service. The data includes the file's name, a timestamp of when it was uploaded, a hash derived from the uploader's IP address, and the country from which the file was submitted, based on that IP address. Though Google masks the IP address to make it difficult to derive from the hash, the hash is still helpful in identifying multiple submissions from the same address. And, strangely, some of the groups Dixon monitored used the same addresses repeatedly to submit their malicious code.

Using an algorithm he created to parse the metadata, Dixon spotted patterns and clusters of files submitted by two well-known cyberespionage teams believed to be based in China, and a group that appears to be in Iran. Over weeks and months, Dixon watched as the attackers tweaked and developed their code and the number of scanners detecting it dropped. He could even in some cases predict when they might launch their attack and identify when some of the victims were hit—code that he saw submitted by some of the attackers for testing later showed up at VirusTotal again when a victim spotted it on a machine and submitted it for detection.
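Dixon's actual algorithm isn't public, but the shape of the analysis is straightforward. Here is a hypothetical Python sketch; the field names are our own invention, not VirusTotal's real schema.

```python
# Hypothetical sketch of the analysis described above: group uploads by
# the anonymized submitter hash, then flag submitters whose repeated
# uploads show falling detection counts -- the fingerprint of someone
# testing malware against the scanners. Field names are illustrative.
from collections import defaultdict

def find_probable_testers(submissions, min_uploads=5):
    by_submitter = defaultdict(list)
    for sub in submissions:  # dicts: submitter_hash, timestamp, detections
        by_submitter[sub["submitter_hash"]].append(sub)
    suspects = {}
    for submitter, uploads in by_submitter.items():
        uploads.sort(key=lambda s: s["timestamp"])
        hits = [u["detections"] for u in uploads]
        if len(hits) >= min_uploads and hits[-1] < hits[0]:
            suspects[submitter] = hits  # e.g. [34, 29, 17, 6, 2]
    return suspects
```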

Tracking the Infamous Comment Crew

One of the most prolific groups he tracked belongs to the infamous Comment Crew team, also known by security researchers as APT1. Believed to be a state-sponsored group tied to China's military, Comment Crew reportedly is responsible for stealing terabytes of data from Coca-Cola, RSA and more than 100 other companies and government agencies since 2006. More recently, the group has focused on critical infrastructure in the U.S., targeting companies like Telvent, which makes control system software used in parts of the U.S. electrical power grid, oil and gas pipelines and in water systems. The group Dixon tracked isn't the main Comment Crew outfit but a subgroup of it.

He also spotted and tracked a group known by security researchers as NetTraveler. Believed to be in China, NetTraveler has been hacking government, diplomatic and military victims for a decade, in addition to targeting the office of the Dalai Lama and supporters of Uyghur and Tibetan causes.

The groups Dixon observed, apparently ignorant of the fact that others could watch them, did little to conceal their activity. However, at one point the Comment Crew did begin using unique IP addresses for each submission, suggesting they suddenly got wise to the possibility that they were being watched.

Dixon got the idea to mine VirusTotal's metadata after hearing security researchers repeatedly express suspicions that hackers were using the site as a testing tool. Until now he's been reluctant to publicly discuss his work on the metadata, knowing it would prompt attackers to change their tactics and make it harder to profile them. But he says there is now enough historical data in the VirusTotal archive that other researchers can mine it to identify groups and activity he may have missed. This week he's releasing code he developed for analyzing the metadata so others can do their own research.

Dixon says it wasn't initially easy to spot groups of attackers in the data. "Finding them turned out to be a very difficult problem to solve," he says. "When I first looked at this data, I didn't know what I should be looking for. I didn't know what made an attacker until I found an attacker."


Surreptitiously Watching Hackers Hone Their Attacks

The data provides a rare and fascinating look at the inner workings of the hacker teams and the learning curve they followed as they perfected their attacks. During the three months he observed the Comment Crew gang, for example, they altered every line of code in their malware's installation routine and added and deleted different functions. But in making some of the changes to the code, the hackers screwed up and disabled their Trojan at one point. They also introduced bugs and sabotaged other parts of their attack. All the while, Dixon watched as they experimented to get it right.

Between August and October 2012, when Dixon watched them, he mapped the Crew's operations as they modified various strings in their malicious files, renamed the files, moved components around, and removed the URLs for the command-and-control servers used to communicate with their attack code on infected machines. They also tested out a couple of packer tools—used to reduce the size of malware and encase it in a wrapper to make it harder for virus scanners to see and identify malicious code.
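Packers leave a statistical fingerprint of their own, which is one reason the cat-and-mouse game continues. A quick, hedged sketch of the standard byte-entropy heuristic defenders use to flag likely packed binaries; the 7.2 threshold is a rough rule of thumb, not a hard line.

```python
# Common defensive heuristic: packed or encrypted executables show byte
# entropy near the 8-bit maximum. The 7.2 cutoff is a rule of thumb.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def probably_packed(path: str, threshold: float = 7.2) -> bool:
    with open(path, "rb") as f:
        return shannon_entropy(f.read()) > threshold
```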

Some of their tactics worked, others did not. When they did work, the attackers often were able to reduce to just two or three the number of engines detecting their code. It generally took just minor tweaks to make their attack code invisible to scanners, underscoring how hard it can be for antivirus engines to keep pace with an attacker's shapeshifting code.

There was no definitive pattern to the kinds of changes that reduced the detection rate. Although all of the samples Dixon tracked got detected by one or more antivirus engines, those with low detection rates were often found only by the more obscure engines that are not in popular use.

Though the Crew sometimes went to great lengths to alter parts of their attack, they curiously never changed other telltale strings—ones pertaining to the Trojan's communication with command servers, for example, remained untouched, allowing Dixon to help develop signatures to spot and halt the malicious activity on infected machines. The Crew also never changed an encryption key they used for a particular attack—derived from an MD5 hash of the string Hello@)!0. And most of the time, the Crew used just three IP addresses to make all of their submissions to VirusTotal before suddenly getting wise and switching to unique IP addresses. Given the number of mistakes the group made, he suspects those behind the code were inexperienced and unsupervised.
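That unrotated key is exactly the kind of constant defenders can bake into signatures; reproducing it takes two lines of Python.

```python
# The static key Dixon observed was derived from the MD5 hash of the
# string "Hello@)!0". Because the Crew never rotated it, these 16 bytes
# can anchor signatures for spotting the Trojan's traffic and files.
import hashlib

key = hashlib.md5(b"Hello@)!0").digest()  # 16-byte digest used as the key
print(key.hex())
```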

Connecting Attacks to Victims

At times, Dixon could track files he saw uploaded to VirusTotal and connect them to victims. And sometimes he could track how much time passed between the end of testing and the launch of an attack. Most of the time, Comment Crew launched its attack within hours or days of testing. For example, on August 20, 2012, the group introduced a bug in their code that never got fixed. The sample, bug intact, showed up on a victim's machine within two days of being tested.

Dixon tracked NetTraveler in much the same way that he tracked the Comment Crew. The Travelers showed up on VirusTotal in 2009 and appeared to grow gradually more prolific over time, nearly doubling the number of files submitted each year. In 2009, the hackers submitted just 33 files to the site, but last year submitted 391 files. They've already submitted 386 this year.

They made it particularly easy to track their code in the wild because even the emails and attachments they used in their phishing campaigns got tested on VirusTotal. More surprising, they even uploaded files they'd stolen from victims' machines. Dixon found calendar documents and attachments taken from some of the group's Tibetan victims uploaded to VirusTotal. He thinks, ironically, that the hackers may have been testing the files to see if they were infected before opening them on their own machines.

The unknown hacker or group of hackers that Dixon tracked from Iran popped up on VirusTotal this past June. In just a month, the party uploaded about 1,000 weaponized documents to the site and showed considerable skill in evading detection. In some cases, they even took old exploits that had been circulating in the wild for two years and managed to tweak them enough to bypass all of the virus scanners. Dixon also spotted what appeared to be members of the PlugX hacking group uploading files to the site. PlugX is a family of malware, believed to be from China, that started appearing in the wild last year and has evolved over time. The PlugX group has uploaded about 1,600 components to VirusTotal since April 2013, and tends to use a unique IP address each time.

Now that the activity of hacking groups on VirusTotal has been exposed, they'll no doubt continue to use the site but alter their ways to better avoid tracking. Dixon is fine with that. As long as security companies now have confirmation that some of the code uploaded to the site is pre-attack code, it gives them an opportunity to look for telltale signs and craft their signatures and other defense mechanisms before the code is released in the wild.

#####EOF##### Ideas | WIRED


#####EOF##### Can You Spot the Contraband in These Airport Baggage X-Rays? | WIRED
Can You Spot the Contraband in These Airport Baggage X-Rays?

The TSA has a long list of things you can't bring onto a commercial flight these days. Scissors. Cigarette lighters. Car airbags. Pool cues. And of course, guns, knives, bombs, and other weapons.

If you've taken a plane (or follow the TSA on Instagram), you've probably wondered how the airport security officers who scan carry-on bags watch for all those threats simultaneously. You've probably pondered how well you'd do the job. And—admit it—you've craned your neck to peek at their screens, trying to suss out the contents of someone else's carry-on.

Here's your chance to take a closer look: The gallery above includes eight x-ray images of luggage, each containing contraband of some sort, including firearms (some real, some fake), knives, and, most devious of all, excessive liquids and gels.


Simulscan, an Italian company that offers computer-based x-ray screening training, provided an inside look at the screening process when it gave us these photos. CEO Roberto Sergnese was a security expert at Continental, PanAm, and American Airlines before starting the company. He says becoming adept at checking luggage for contraband requires answering three questions: What are you looking for? What does it look like? What does it look like in an x-ray image?

"You don't have to recognize everything inside a bag," Sergnese says. That simply isn't feasible. The trick is knowing what the threats are, and how to spot them. That means knowing, say, how a terrorist might fashion an improvised explosive device. "Nobody will come with a bomb like in the cartoons," he says.

Of course, he wasn't about to say just what TSA agents and others are looking for, citing, as you'd expect, security. But ultimately, you're looking for anomalies. Things that don't look quite right. That's where experience comes in: The more totally ordinary bags you see, the easier it is to spot potentially dangerous deviations from the norm.

For this quiz, you don't have that experience, or, presumably, any official training. You do, however, have a few advantages: You know there's something to find in each image. You haven't just spent hours doing this, watching socks and skivvies and bottles of shampoo no bigger than 3.4 ounces roll by in an unending torrent. And you don't have a growing line of exasperated travelers getting restless while you stare at their belongings. So get to it.


#####EOF##### CONTACT US | WIRED

CONTACT US

#####EOF##### Facebook Hires Up Three of Its Biggest Privacy Critics | WIRED
Facebook Hires Up Three of Its Biggest Privacy Critics

Nate Cardozo had been a senior staff attorney at the Electronic Frontier Foundation before Facebook scooped him up, along with Access Now's Nathan White and OTI's Robyn Greene.
Noam Galai/Getty Images

For years, critics have taken aim at Facebook's privacy missteps, from the Cambridge Analytica scandal to this week's revelation that Facebook has paid people—including minors—to let it spy on all of their online activity, potentially even including their encrypted private messages. Which makes it a potentially very big deal that over the last several weeks, the company has quietly hired three prominent privacy advocates, all outspoken critics, ostensibly to help right the ship.

In December, Facebook hired Nathan White away from the digital rights nonprofit Access Now, and put him in the role of privacy policy manager. On Tuesday of this week, lawyers Nate Cardozo, of the privacy watchdog Electronic Frontier Foundation, and Robyn Greene, of New America's Open Technology Institute, announced they also are going in-house at Facebook. Cardozo will be the privacy policy manager of WhatsApp, while Greene will be Facebook's new privacy policy manager for law enforcement and data protection.

"Whether they’ll be able to be effective inside what’s become a big bureaucracy that makes money off of knowing a ton about us remains to be seen."

Jennifer Granick, ACLU

These three people are lions in the world of data privacy. (WIRED has interviewed all three for various stories about privacy risks.) And they have been particularly vocal critics of Facebook. By bringing them in-house, Facebook sends the message that it’s going to give real decisionmaking power to people who deeply understand the ways in which the social media site and its family of apps undermine the privacy of its users. The open question is whether Facebook will actually listen.

Privacy advocates have so far struck a note of cautious optimism. "Nate, Robyn, and Nathan know the challenges, and they wouldn’t go to Facebook unless they saw a real opportunity to make a meaningful difference. They are all going to try to move fast and break things—to benefit privacy," said privacy expert and ACLU attorney Jennifer Granick in an email to WIRED. "Whether they’ll be able to be effective inside what’s become a big bureaucracy that makes money off of knowing a ton about us remains to be seen."

Jen King, director of consumer privacy at Stanford’s Center for Internet and Society, thinks it's a sign Facebook may be ready to actually take privacy seriously. "It's possible that Facebook has finally gotten the memo and is really trying to make change," King told WIRED. She also noted, though, that Facebook has decided to bolster its privacy credentials fairly late in the game, especially given that its irresponsible handling of user data led to a Federal Trade Commission consent decree all the way back in 2011. The FTC is currently investigating allegations that Facebook has since broken those promises. But with increased scrutiny, and more regulatory power coming from Europe and elsewhere, Facebook has almost no choice but to get with the program.

A skeptical view would hold that Facebook made the hires in part to silence three critics, and Facebook has certainly merited skepticism. But those who know the trio argue that they've joined in good faith, and would leave if they found themselves unable to effect positive change from within.

"Nate, Robyn, and Nathan ... are people of deep conviction," says David O'Brien, assistant research director for privacy and security at Harvard's Berkman Klein Center for Internet and Society. "They also have strong moral compasses. I have to think they would not have accepted these roles at Facebook without being assured their contributions would be taken seriously."

"Hiring a few people doesn't change culture, especially in an organization that has become as large and sprawling as Facebook."

David O'Brien, Harvard University

In the past, for instance, Cardozo has called Facebook "creepy," adding that its "business model depends on our collective confusion and apathy about privacy. That’s wrong, as a matter of both ethics and law." For years he worked on EFF's annual report ranking tech companies on how well they safeguard user privacy, which has often ranked WhatsApp and Facebook terribly. In December, Cardozo's colleagues at EFF concluded "Facebook has never deserved your trust."

“If you know me at all, you’ll know this isn’t a move I’d make lightly,” Cardozo wrote in a Facebook post announcing his new job. “After the privacy beating Facebook’s taken over the last year, I was skeptical too. But the privacy team I’ll be joining knows me well, and knows exactly how I feel about tech policy, privacy, and encrypted messaging. And that’s who they want managing privacy at WhatsApp.”

Besides, Facebook will persist with or without privacy-focused employees. That makes the "if you can't beat them, join them" strategy more palatable. "Hiring a few people doesn't change culture, especially in an organization that has become as large and sprawling as Facebook," said O'Brien. "I take this as a sign that Facebook is at the very least interested in exploring what change might look like."

There's a hope, also, that White, Cardozo, and Greene will not just help bolster Facebook's privacy cred but also open up helpful conversations between their former advocacy worlds and Facebook's leadership.

And change is coming. After years of keeping WhatsApp, Facebook, and Instagram relatively separate, Zuckerberg has grand plans for uniting the messaging components of those platforms so that people can communicate across all three. This will be a big test for WhatsApp, and therefore Cardozo. WhatsApp has had full, default end-to-end encryption since 2016, and Cardozo will be tasked with helping to make sure that encryption isn't undermined by combining the services.

It will be very hard to know from the outside whether the gamble for Cardozo, White, and Greene to go inside Facebook pays off. "Once people go on the inside it's difficult for them to talk publicly," notes King.

Cardozo initially agreed to talk to WIRED for this story, then declined after Facebook's communications team got involved. Greene and White did not respond to requests for comment. WIRED has reached out to Facebook for comment and will update this story if we hear back. In her announcement on Twitter, Greene called Facebook’s privacy team “incredible.” In his announcement, Cardozo mentioned the “enormous challenge” the job posed. That might be putting it mildly.



#####EOF##### Meet Our Team | WIRED
Meet Our Team

executive
  • Nicholas Thompson
  • Editor in chief

  • Anna Goldwater Alexander
  • Director of photography
  • Ryan Aspell
  • Director, brand development
  • Alex Baker-Whitcomb
  • Manager of audience development
  • Gregory Barber
  • Associate editor
  • Brian Barrett
  • News editor
  • Jahna Berry
  • Head of content operations
  • Sara Bogush
  • Associate director of analytics
  • Kam Burns
  • Social media coordinator
  • Michael Calore
  • Senior editor
  • Indu Chandrasekhar
  • Director of audience development
  • Casey Chin
  • Digital art director
  • Kimberly Chua
  • Senior digital producer
  • Samantha Cooper
  • Senior photo editor, platforms
  • Aidan Corrigan
  • Supervising producer
  • Alex Davies
  • Senior associate editor
  • Jay Dayrit
  • Editorial operations manager
  • Maya Draisin
  • VP of marketing
  • Emily Dreyfuss
  • Senior writer
  • Brian Dustrud
  • Copy chief
  • Jon J. Eilenberg
  • Articles editor
  • Sarah Fallon
  • Deputy web editor
  • Meghann Farnsworth
  • Director of social media
  • Sean Patrick Farrell
  • Senior video producer
  • Klint Finley
  • Contributing writer
  • Catherine Fish
  • Executive director of brand and business development
  • Alyssa Foote
  • Junior art director
  • Robbie Gonzalez
  • Senior writer
  • Lauren Goode
  • Senior writer
  • John Gravois
  • Senior editor
  • Andy Greenberg
  • Senior writer
  • Emma Grey Ellis
  • Staff writer
  • Arthur Guiling
  • West Coast facilities coordinator
  • Caitlin Harrington
  • Contributing research editor
  • Ricki Harris
  • Editorial assistant
  • Lily Hay Newman
  • Staff writer
  • Olman Hernández
  • Video producer and animator
  • Maili Holiman
  • Creative director
  • Beth Holzer
  • Visuals manager
  • Lydia Horne
  • Editorial assistant
  • Rachel Janc
  • PR associate
  • Erica Jewell
  • Managing editor
  • Lauren Joseph
  • Associate photo editor
  • Jason Kehe
  • Senior associate editor
  • Caitlin Kelly
  • Senior editor
  • Junho Kim
  • Video producer
  • Katherine Kirkland
  • Director of brand marketing
  • Ryan Langsdorf
  • Executive chef
  • Issie Lapowsky
  • Senior writer
  • Steven Levy
  • Editor at large
  • Nick Liptak
  • Story editor
  • Ryan Loughlin
  • Video producer
  • Anthony Lydgate
  • Senior editor
  • Aarian Marshall
  • Staff writer
  • Paris Martineau
  • Staff writer
  • Louise Matsakis
  • Staff writer
  • Ryan Meith
  • Production specialist
  • Megan Molteni
  • Staff writer
  • Lauren Murrow
  • Senior editor
  • Justice Namaste
  • Social media coordinator
  • Robert Novick
  • VP of business development and finance
  • Florence Pak
  • Design director
  • Arielle Pardes
  • Senior associate editor
  • Jason Parham
  • Senior writer
  • Joanna Pearlstein
  • Deputy editor, newsroom standards
  • Phuc Pham
  • Photo researcher
  • Saraswati Rathod
  • Contributing research editor
  • L. Paul Robertson
  • Executive director, marketing
  • Mark Robinson
  • Features editor
  • Adam Rogers
  • Deputy editor
  • Scott Rosenfield
  • Site director
  • Peter Rubin
  • Platforms editor
  • Paul Sarconi
  • Social media manager
  • Robbie Sauerberg
  • General manager, advertising
  • Michael Scott Magallanes
  • Cochef
  • Lee Simmons
  • Story editor
  • Matt Simon
  • Senior writer
  • Tom Simonite
  • Senior writer
  • Adrienne So
  • Senior commerce writer
  • Andy Sonnenberg
  • VP of revenue
  • Maria Streshinsky
  • Executive editor
  • Theresa Thadani
  • Digital production artist
  • Scott Thurm
  • Business editor
  • Nitasha Tiku
  • Senior writer
  • Vera Titunik
  • Features editor
  • Annie Trinh Steinhaus
  • Senior business director
  • Sandra Upson
  • Senior editor
  • Sara Urbaez
  • Photo editor, platforms
  • Andrea Valdez
  • Editor of WIRED.com
  • Jeffrey Van Camp
  • Senior commerce writer
  • Emily Waite
  • Designer
  • Alyssa Walker
  • Managing art director
  • Angela Watercutter
  • Senior associate editor
  • Corey Wilson
  • Executive director, communications
  • Wonbo Woo
  • Executive producer

#####EOF##### Forcing Commenters to Use Real Names Won’t Root Out the Trolls | WIRED

Forcing Commenters to Use Real Names Won’t Root Out the Trolls

#####EOF##### Frequently Asked Questions | WIRED

Frequently Asked Questions

#####EOF##### Sign Up for WIRED’s Email Newsletter | WIRED
#####EOF##### Android Security Is Better But Still Has a Long Way to Go | WIRED
Good News: Android’s Huge Security Problem Is Getting Less Huge

First, the good news: Half of all Android devices have gotten fairly recent security updates, patching the hackable flaws that leave users vulnerable to digital crime and espionage. The bad news? The other half hasn't.

In an annual report on the security of the world's 1.4 billion Android devices that Google released today, the company touts the ever-improving state of Android security. Less malware winds up in its Google Play store, devices are better encrypted, and more hackers than ever report Android bugs to Google in exchange for so-called "bug bounties." But Google has also released solid data for the first time on Android's most serious nagging security problem: The challenge of getting dozens of manufacturers and hundreds of carriers around the world to cooperate on regularly patching Android phones and tablets. On that point, the company argues that a 50 percent annual patching rate beats where it's been in the past—but it's still not remotely good enough.

"We're proud of the fact that half of devices received an update in 2016, but that's not sufficient," says Adrian Ludwig, Google's director of Android Security. "We're making the number available, and we think it's an indication of good progress. It doesn't mean we're done."

Insecure Ecosystem

While half of Android devices going unpatched in 2016 represents a glaring security problem, Ludwig says it's nonetheless a milestone; he estimates that twice as many people installed an Android patch in 2016 as did in 2015. And he suggested that number could reach 75 percent in 2017, though he stopped short of describing that increase as an official goal.

Those patching statistics are a mixed bag, says Josh Drake, a researcher for security firm Zimperium who in 2015 found the so-called Stagefright vulnerability that allowed the takeover of Android phones with only a text message. "If this is really a doubling, that's great," Drake says. "But fifty percent is a terrible number."

When it comes to software upgrades for new features and security patches, Google has long struggled to get anywhere near the high rate of software update adoption that Apple's iOS boasts. Less than three percent of Android phones run the operating system's latest version, Nougat, while nearly 80 percent of iOS devices run Apple's latest version, iOS 10. And Nougat officially launched three weeks before iOS 10.

If Google's patching rate has in fact doubled, that represents an "incredibly positive" improvement, says Rich Smith, head of R&D at the mobile authentication security firm Duo. But he says Google's new data also further illustrates how starkly Android devices have lagged in security updates. The fact that half of devices received an update sometime in 2016 doesn't mean they've received one at all recently, he points out. "When exactly you got the patch can be the difference between being protected from trivial things or really critical things," Smith says.

Smith points to the well-publicized attack on Android phones known as Quadrooter, revealed in the summer of 2016. According to Duo's own data, the security flaws that attack exploited have been patched on only about 40 percent of phones on which Duo's authentication app is installed—and that's a more business-focused, North American collection of users than the overall Android user base Google's report measures. "This is an issue that was shouted from the rooftops, the world is on fire, and those updates still haven't happened," Smith says.

Fragment Nation

Android's biggest hurdle to better patching remains the byzantine fragmentation of its operating system. Samsung alone offers 13 models, sold by 200 different carriers, each of which customizes its operating system to different degrees. That results in close to 1,500 variations of every version of the software, says Samsung's mobile security director Henry Lee. "It might seem like we just receive a patch from Google and apply it, but it's actually not that simple," he says. About 60 percent of Samsung users received an update in 2016, Lee says, but about 15 percent use old, unsupported versions of Android, and another 15 percent simply ignore the updates.

Still, thanks to Samsung's market dominance, those percentages result in big numbers. Samsung security updates now reach 400 million devices, spread across hundreds of global carriers. That's a sizable chunk of the just over 1.4 billion total Android users.

The improvement over 2015 shows that some of Google's efforts are working. Google and several of its manufacturing partners, including Samsung and LG, have started pushing out monthly security-specific updates for Android devices that run versions as old as 2013's KitKat. Android's Ludwig says Google has worked to make its updates more seamless and smaller in size. It has pressured carriers to take patching more seriously, and convinced many of them not to count software updates against user data plans. Google also developed so-called "A/B updates" that allow businesses to try out new software, and easily roll it back if it causes compatibility issues with any critical enterprise software.

Google's Ludwig also emphasizes that Android's security has improved in other important ways. Play Store filters catch more malware than ever, he points out, preventing malicious apps from infecting users' devices. Just 0.71 percent of users had any sort of malware on their phone in the fourth quarter of 2016, according to Google's report. And while that's up from the half a percent of infected users a year before, the numbers were far better when users only downloaded apps from the Play store: Just 0.05 percent of those phones were infected with malware in the fourth quarter of 2016, down from 0.15 percent the year before. (A good reminder to never download software outside of Google Play.)

As optimistic as those malware numbers may seem, they don't account for lower-volume, targeted hacks against sensitive victims. WikiLeaks' recent release of secret CIA files, for instance, revealed dozens of years-old Android hacking techniques no doubt used to stealthily spy on small numbers of individuals not accounted for in Google's statistics. Google declined to comment on whether or when the flaws those CIA hacking techniques exploited might have been fixed.

Which gets back to that original problem. Whether or not Google patches the flaws exposed in WikiLeaks' Vault 7 release of CIA files—or plenty of others hiding in its smartphones' code—as many as half of Android users will remain as vulnerable as ever.

This story has been updated to include additional data points.

#####EOF##### Clever New GitHub Tool Lets Coders Build Software Like Bridges | WIRED
Clever New GitHub Tool Lets Coders Build Software Like Bridges

Jesse Toth says that upgrading an Internet service is like building a new bridge across San Francisco Bay.

In building the new eastern span of the Bay Bridge, engineers didn't tear down the old one and erect the new one in its place. They built the new span alongside the old one, before making sure the new bridge could handle the same traffic. Only then did they switch all the cars over and start tearing down the old span. As Toth explains, when it comes time to rebuild software that underpins a service like Google or Facebook or Uber, the process should work in much the same way. "You battle-test this new bridge—this new code path—while the original one is still being used," she says.

Toth is an engineer at GitHub—the company at the heart of the modern software world—and today, she and her fellow GitHub engineers officially released a tool designed to ensure that your new code is ready before you disconnect your old code—in some cases, very old code. The tool is called Scientist, and as open source software, it's freely available to all. Toth and others believe it could potentially help anyone upgrade even the largest of online services.

"I feel like the scope for this is huge. As soon as you write code, it becomes legacy code. Somebody has to maintain it, and eventually, you will need to change it," Toth says. "It's hard for people to make these changes and feel confident in them."

Do No Harm

Scientist uses a clever engineering technique called Branch By Abstraction. Basically, it wraps your old service in an extra layer of software that handles communication with the outside world, juggling all inputs and outputs. GitHub calls this an abstraction layer, or an experiment. You can then write your new code to fit the abstraction layer, ensuring that it can handle all the same inputs and outputs. Once this is done and properly tested, you can flip a switch so that the abstraction points not to your old code but to the new.

The trick is that, during the testing phase, the abstraction layer can run the old code and the new in parallel. The same live data streams into both systems, and Scientist records any differences in behavior. Using this data, you can make any needed tweaks to your new code. "It guides code changes," Toth explains. "It makes sure they are done safely and that they're not destroying what's there already or introducing new bugs."
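Scientist itself is a Ruby library, but the core pattern carries over to any language. The Python sketch below is our re-creation of the idea, not GitHub's API: run old and new code on the same live inputs, publish any mismatch, and always return the old path's answer.

```python
# A re-creation of the experiment pattern in Python (names ours, not
# GitHub's Ruby API): run control (old) and candidate (new) side by side,
# record mismatches and candidate errors, and always serve the old result.
import time

def experiment(name, control, candidate, publish):
    def run(*args, **kwargs):
        expected = control(*args, **kwargs)  # old path still serves traffic
        try:
            start = time.perf_counter()
            observed = candidate(*args, **kwargs)
            publish(name, observed != expected, time.perf_counter() - start)
        except Exception:
            publish(name, True, None)  # a buggy candidate never leaks out
        return expected  # callers only ever see the old code's answer
    return run

# Usage: wrap the old permissions check, compare against the rewrite.
# can_read = experiment("repo-permissions", old_can_read, new_can_read, log)
```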

Toth and her colleagues built Scientist as a way of upgrading GitHub's own service, an online code repository that has become the world's primary means of sharing and editing open source software. They rewrote the permissions system that controls access to the service's thousands of repositories, and with Scientist, they could not only build the system on the fly but properly test it before it went live. "We were having trouble rewriting and replacing the code in a way we felt was safe," Toth says. "With thousands of repositories, testing whether one small change broke one of them was really hard to do."

Old Code Is Everywhere

Scientist is designed to work with Ruby, the programming language that underpins GitHub. But according to Toth, the same ideas can be applied to any other language—or even help you move a service from one language to another. She envisions aging banks using it to upgrade decades-old Fortran code to Ruby or any other modern language.

Nate Holland, an engineer with software company SpiceWorks, has used an earlier version of Scientist, and he calls it "an immensely useful way to do otherwise dangerous refactors in a relatively safe and controlled manner." That said, he points out that it is far more useful when you're upgrading much older pieces of code, as opposed to code that was built in recent years with more modern tools. "There aren't as many knots or twists that may be dangerous," he says.

But much like Toth, he sees Scientist as "an elegant, abstract concept" that could be quickly applied to any language. And in a world ever-more dependent on code, much older pieces of code are still everywhere.

#####EOF##### This 'Demonically Clever' Backdoor Hides In a Tiny Slice of a Computer Chip | WIRED
This 'Demonically Clever' Backdoor Hides In a Tiny Slice of a Computer Chip

Security flaws in software can be tough to find. Purposefully planted ones—hidden backdoors created by spies or saboteurs—are often even stealthier. Now imagine a backdoor planted not in an application, or deep in an operating system, but even deeper, in the hardware of the processor that runs a computer. And now imagine that silicon backdoor is invisible not only to the computer’s software, but even to the chip’s designer, who has no idea that it was added by the chip’s manufacturer, likely in some far-flung Chinese factory. And that it’s a single component hidden among hundreds of millions or billions. And that each one of those components is less than a thousandth of the width of a human hair.

In fact, researchers at the University of Michigan haven't just imagined that computer security nightmare; they've built it and proved it works. In a study that won the “best paper” award at last week’s IEEE Symposium on Security and Privacy, they detailed the creation of an insidious, microscopic hardware backdoor proof-of-concept. And they showed that by running a series of seemingly innocuous commands on their minutely sabotaged processor, a hacker could reliably trigger a feature of the chip that gives them full access to the operating system. Most disturbingly, they write, that microscopic hardware backdoor wouldn't be caught by practically any modern method of hardware security analysis, and could be planted by a single employee of a chip factory.

"Detecting this with current techniques would be very, very challenging if not impossible," says Todd Austin, one of the computer science professors at the University of Michigan who led the research. "It's a needle in a mountain-sized haystack." Or as Google engineer Yonatan Zunger wrote after reading the paper: "This is the most demonically clever computer security attack I've seen in years."

Analog Attack

The "demonically clever" feature of the Michigan researchers' backdoor isn't just its size, or that it's hidden in hardware rather than software. It's that it violates the security industry's most basic assumptions about a chip's digital functions and how they might be sabotaged. Instead of a mere change to the "digital" properties of a chip—a tweak to the chip's logical computing functions—the researchers describe their backdoor as an "analog" one: a physical hack that takes advantage of how the actual electricity flowing through the chip's transistors can be hijacked to trigger an unexpected outcome. Hence the backdoor's name: A2, which stands for both Ann Arbor, the city where the University of Michigan is based, and "Analog Attack."

Here's how that analog hack works: After the chip is fully designed and ready to be fabricated, a saboteur adds a single component to its "mask," the blueprint that governs its layout. That single component or "cell"—of which there are hundreds of millions or even billions on a modern chip—is made out of the same basic building blocks as the rest of the processor: wires and transistors that act as the on-or-off switches that govern the chip's logical functions. But this cell is secretly designed to act as a capacitor, a component that temporarily stores electric charge.


Every time a malicious program—say, a script on a website you visit—runs a certain, obscure command, that capacitor cell "steals" a tiny amount of electric charge and stores it in the cell's wires without otherwise affecting the chip's functions. With every repetition of that command, the capacitor gains a little more charge. Only after the "trigger" command is sent many thousands of times does that charge hit a threshold where the cell switches on a logical function in the processor to give a malicious program the full operating system access it wasn't intended to have. "It takes an attacker doing these strange, infrequent events in high frequency for a duration of time," says Austin. "And then finally the system shifts into a privileged state that lets the attacker do whatever they want."

That capacitor-based trigger design means it's nearly impossible for anyone testing the chip's security to stumble on the long, obscure series of commands to "open" the backdoor. And over time, the capacitor also leaks out its charge again, closing the backdoor so that it's even harder for any auditor to find the vulnerability.
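The paper's circuit parameters aren't reproduced here, but a toy numerical model (entirely our construction, not the researchers') shows why only a dense burst of the trigger command opens the backdoor while ordinary use never does.

```python
# Toy model of the A2 trigger (our construction, not the paper's circuit):
# each trigger command deposits charge, charge leaks between commands, and
# only a dense burst crosses the threshold that flips the privilege bit.
def backdoor_opens(gaps, gain=1.0, leak=0.01, threshold=50.0):
    charge = 0.0
    for ticks in gaps:  # clock ticks since the previous trigger command
        charge *= (1 - leak) ** ticks  # capacitor leaks while idle
        charge += gain                 # trigger command adds charge
        if charge >= threshold:
            return True                # privilege-escalation wire flips
    return False

print(backdoor_opens([1] * 100))   # True: rapid burst crosses the threshold
print(backdoor_opens([50] * 100))  # False: spread out, leakage wins
```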

New Rules

Processor-level backdoors have been proposed before. But by building a backdoor that exploits the unintended physical properties of a chip's components—their ability to "accidentally" accumulate and leak small amounts of charge—rather than their intended logical function, the researchers say their backdoor component can be a thousandth the size of previous attempts. And it would be far harder to detect with existing techniques like visual analysis of a chip or measuring its power use to spot anomalies. "We take advantage of these rules 'outside of the Matrix' to perform a trick that would [otherwise] be very expensive and obvious," says Matthew Hicks, another of the University of Michigan researchers. "By following that different set of rules, we implement a much more stealthy attack."

The Michigan researchers went so far as to build their A2 backdoor into a simple open-source OR1200 processor to test out their attack. Since the backdoor mechanism depends on the physical characteristics of the chip's wiring, they even tried their "trigger" sequence after heating or cooling the chip to a range of temperatures, from negative 13 degrees to 212 degrees Fahrenheit, and found that it still worked in every case.


As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it's very possible, in fact, that governments around the world may have already thought of their analog attack method. "By publishing this paper we can say it’s a real, imminent threat," says Hicks. "Now we need to find a defense."

But given that existing techniques for detecting processor-level backdoors wouldn't spot their A2 attack, they argue that a new method is required: Specifically, they say that modern chips need to have a trusted component that constantly checks that programs haven't been granted inappropriate operating-system-level privileges. Ensuring the security of that component, perhaps by building it in secure facilities or making sure the design isn't tampered with before fabrication, would be far easier than ensuring the same level of trust for the entire chip.

They admit that implementing their fix could take time and money. But without it, their proof-of-concept is intended to show how deeply and undetectably a computer's security could be corrupted before it's ever sold. "I want this paper to start a dialogue between designers and fabricators about how we establish trust in our manufactured hardware," says Austin. "We need to establish trust in our manufacturing, or something very bad will happen."

#####EOF##### Surveillance Kills Freedom By Killing Experimentation | WIRED
Surveillance Kills Freedom By Killing Experimentation

Excerpted from the upcoming issue of McSweeney's, "The End of Trust," a collection featuring more than 30 writers investigating surveillance, technology, and privacy.
NASA

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren’t as willing to be our individual selves when others are watching.

Bruce Schneier is an internationally renowned security technologist. He teaches at the Harvard Kennedy School, and serves as special advisor to IBM Security. His new book is called Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World.

Let’s take an example: hearing that parents and children are being separated as they cross the U.S. border, you want to learn more. You visit the website of an international immigrants’ rights group, a fact that is available to the government through mass internet surveillance. You sign up for the group’s mailing list, another fact that is potentially available to the government. The group then calls or emails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If instead of visiting the website you visit the group’s Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn’t matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn’t just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn’t exclusively an American phenomenon; internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments—if found to be inoffensive or even essential to society—cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress—from ending slavery to fighting for women’s rights—began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks’ right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it’s currently sitting between somewhat moral, and—depending on the state or country in question—tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn’t really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don’t yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they’re around. And they require privacy to germinate. Take away that privacy, and we’ll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do—the different things each of us does—that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don’t harm anyone. But it’s important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn’t necessarily the morality of everyone—and that that’s okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It’s easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today’s surveillance capabilities in the 1920s it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less “what if we didn’t…” thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a “social credit” rating. The details are yet unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without those scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social surveillance system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone’s knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.


From The End of Trust (McSweeney’s 54), out November 20th, a collection featuring over thirty writers investigating surveillance, technology, and privacy, with special advisors The Electronic Frontier Foundation. Wired readers can take 10% off the issue, or a full subscription, with the code WIRED.


#####EOF##### Bitcoin Mining Has a Massive Carbon Footprint | WIRED
Bitcoin Mining Guzzles Energy—And Its Carbon Footprint Just Keeps Growing

Paul Ratje/The Washington Post/Getty Images

This story originally appeared on Grist and is part of the Climate Desk collaboration.

If you’re like me, you’ve probably been ignoring the bitcoin phenomenon for years — because it seemed too complex, far-fetched, or maybe even too libertarian. But if you have any interest in a future where the world moves beyond fossil fuels, you and I should both start paying attention now.

Last week, the value of a single bitcoin broke the $10,000 barrier for the first time. Over the weekend, the price nearly hit $12,000. At the beginning of this year, it was less than $1,000.

If you had bought $100 in bitcoin back in 2011, your investment would be worth nearly $4 million today. All over the internet there are stories of people who treated their friends to lunch a few years ago and, as a novelty, paid with bitcoin. Those same people are now realizing that if they’d just paid in cash and held onto their digital currency, they’d now have enough money to buy a house.

That sort of precipitous rise is stunning, of course, but bitcoin wasn’t intended to be an investment instrument. Its creators envisioned it as a replacement for money itself—a decentralized, secure, anonymous method for transferring value between people.

But what they might not have accounted for is how much of an energy suck the computer network behind bitcoin could one day become. Simply put, bitcoin is slowing the effort to achieve a rapid transition away from fossil fuels. What’s more, this is just the beginning. Given its rapidly growing climate footprint, bitcoin is a malignant development, and it’s getting worse.

Cryptocurrencies like bitcoin provide a unique service: Financial transactions that don’t require governments to issue currency or banks to process payments. Writing in the Atlantic, Derek Thompson calls bitcoin an “ingenious and potentially transformative technology” that the entire economy could be built on — the currency equivalent of the internet. Some are even speculating that bitcoin could someday make the US dollar obsolete.

But the rise of bitcoin is also happening at a specific moment in history: Humanity is decades behind schedule on counteracting climate change, and every action in this era should be evaluated on its net impact on the climate. Increasingly, bitcoin is failing the test.

Digital financial transactions come with a real-world price: The tremendous growth of cryptocurrencies has created an exponential demand for computing power. As bitcoin grows, the math problems computers must solve to make more bitcoin (a process called “mining”) get more and more difficult—a wrinkle designed to control the currency’s supply.
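
The "math problem" in question is a brute-force hash search: miners try nonce after nonce until a hash of the candidate block falls below a moving target. A minimal sketch of the idea, using leading zero hex digits as a stand-in for Bitcoin's real target encoding:

```typescript
import { createHash } from "crypto";

// Minimal proof-of-work sketch: find a nonce such that SHA-256(data + nonce)
// starts with `difficulty` zero hex digits. Each extra digit multiplies the
// expected work (and therefore the energy spent) by 16; this is the knob the
// network turns as more mining hardware comes online.
function mine(data: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(data + nonce).digest("hex");
    if (digest.startsWith(target)) return nonce; // block "mined"
  }
}

console.log("nonce found:", mine("toy block header", 4)); // ~65,000 hashes
```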

Today, a single bitcoin transaction consumes as much energy as nine US homes use in one day. And miners are constantly installing more and faster computers. Already, the aggregate computing power of the bitcoin network is nearly 100,000 times larger than that of the world’s 500 fastest supercomputers combined.

The total energy use of this web of hardware is huge—an estimated 31 terawatt-hours per year. More than 150 individual countries in the world consume less energy annually. And that power-hungry network is currently increasing its energy use every day by about 450 gigawatt-hours, roughly the same amount of electricity the entire country of Haiti uses in a year.
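
Those figures hang together on simple unit arithmetic. As a rough check (the transaction count of about 300,000 per day and the 30 kWh a typical US home uses daily are assumptions not given in the piece):

```typescript
// Back-of-envelope check on the energy figures quoted above. The transaction
// count (~300,000/day) and per-home usage (~30 kWh/day) are assumptions.
const networkTWhPerYear = 31;                      // quoted annual network usage
const kWhPerDay = (networkTWhPerYear * 1e9) / 365; // 1 TWh = 1e9 kWh
const txPerDay = 300_000;
const kWhPerTx = kWhPerDay / txPerDay;
const homeKWhPerDay = 30;
console.log("kWh per transaction:", kWhPerTx.toFixed(0)); // ~283
console.log("US homes powered for a day:",
  (kWhPerTx / homeKWhPerDay).toFixed(1));                 // ~9.4
```

Within rounding, that reproduces the nine-homes figure above.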

That sort of electricity use is pulling energy from grids all over the world, where it could be charging electric vehicles and powering homes, to bitcoin-mining farms. In Venezuela, where rampant hyperinflation and subsidized electricity have led to a boom in bitcoin mining, rogue operations are now occasionally causing blackouts across the country. The world’s largest bitcoin mines are in China, where they siphon energy from huge hydroelectric dams, some of the cheapest sources of carbon-free energy in the world. One enterprising Tesla owner even attempted to rig up a mining operation in his car, to make use of free electricity at a public charging station.

Just a few months from now, at bitcoin’s current growth rate, the electricity demanded by the cryptocurrency network will start to outstrip what’s available, requiring new energy-generating plants. And with the climate-conscious racing to replace fossil fuel-based plants with renewable energy sources, new stress on the grid means more facilities using dirty technologies. By July 2019, the bitcoin network will require more electricity than the entire United States currently uses. By February 2020, it will use as much electricity as the entire world does today.

This is an unsustainable trajectory. It simply can’t continue.

There are already several efforts underway to reform how the bitcoin network processes transactions, with the hope that it’ll one day require less electricity to make new coins. But as with other technological advances like irrigation in agriculture and outdoor LED lighting, more efficient systems for mining bitcoin could have the effect of attracting thousands of new miners.

It’s certain that the increasing energy burden of bitcoin transactions will divert progress from electrifying the world and reducing global carbon emissions. In fact, I’d guess it probably already has. The only question at this point is: by how much?

#####EOF##### Restoring This WWII B-29 Bomber Has Taken 300K Hours So Far | WIRED
Restoring This WWII B-29 Bomber Has Taken 300K Hours So Far

This summer, if the dreams of a nonprofit group in Wichita, Kansas, come true, two World War II-era B-29 Superfortress bombers will fly together for the first time in a half century. Doc, originally one of a squadron of eight airplanes named for Snow White and the seven dwarfs, will finally take off and join Fifi, which has been flying since 1974. It's an unlikely event that almost didn't happen—two relics, loud and slow, each of them powered by four big finicky radial engines, restored and maintained by hundreds of volunteers. Together, they'll be an impressive sight, their polished aluminum skins gleaming in the sun, their long slender wings stretching 140 feet tip to tip, living ambassadors from the distant past.

"There was a good chance this airplane was never going to fly again," says Jim Murphy, leader of the restoration effort for Doc's Friends. "We weren't going to let that happen." The airplane, built in 1944, was decommissioned after serving in the Korean War, then used for target practice in the California desert. The bomber's technology was outdated. It was slow. Its military usefulness was gone. But a group of historians who dreamed to see the big airplane fly again rescued it in 1987, and in 2000, Doc was trucked to Wichita for restoration.

Murphy plans to roll the airplane out of the hangar soon, and start taxi and flight testing in the spring. "We're going to try hard to fly to Oshkosh [Wisconsin] in July," he says, where Doc and Fifi could finally meet. The two crews plan to fly together above the crowds at EAA AirVenture, the biggest air show in the world. The formation, though small, will evoke the memory of a sky full of the bombers, 1,000 at a time, flying above Tokyo in the final days of World War II.

When the B-29 was designed by Boeing in 1939, it was a technological powerhouse. The guns could be fired by remote control using computerized sights. The crew areas were pressurized, so the men could tolerate long missions at altitudes above 18,000 feet. Eight turrets housed machine guns, and some versions carried a 20mm cannon beneath the tail. The cockpit instruments and radar gear were accurate enough to help the crews aim at targets through cloud layers and at night. Nearly 4,000 were built. The Enola Gay, whose crew dropped the atomic bomb on Hiroshima, was a B-29.

A US Air Force Boeing B-29 Superfortress bomber flying above the clouds and mountains, mid 1940s.

Underwood Archives/Getty Images

"Most of Doc's parts are exact copies of the original parts," says Murphy, "but the engines have been upgraded. The original engines had lots and lots of problems." The front-­row cylinders on the radials exhausted to the front, he says, causing overheating and fires. "Those engines were the most unreliable part of the airplane," says Murphy. "Fifi had already converted to a modified hybrid engine design that combines the original front end with the back end of an engine off an old Sky Raider, and adds 1,000 horsepower. We'll use that same modification, but the engines will look and sound just the same as the originals."

That look and sound is important to the few remaining veterans who still remember their WWII missions. "Last summer, we got a call from the 73rd Bomb Wing—they wanted to hold their final reunion in the hangar here with Doc," says Murphy. "Listening to those guys and the stories they told, it was a day I'll never forget. One guy had been shot down three times. Another was a gunner, and he'd been shot in the face—he lost his nose and part of an eye—and he only missed one mission. Those guys could have come home after 20 missions, but they all flew 35 or 40. 'We went over to win, not to go home,' they said. All the stories—it was like it was yesterday, when those guys saw the airplane."

The restoration's not done yet. "The airplane is still up on jacks. We're finishing up the gear doors and we should have those ready this week, then we'll be ready to test the gear. Then we'll come down off the jacks for the last time. We've got to do the finishing touches on the avionics, then we'll just be waiting for weather," Murphy says.

Once Doc is up and flying this summer, Murphy will face the next challenge—how to recruit and train the next generation of volunteers to keep the airplane in the air. It takes a crew of six to fly Doc: two pilots, a flight engineer, and three observers to monitor the flaps and gear and all the other moving parts. Dozens more are needed to maintain and provide support for the big bomber. Most of the current crew are retired workers from Boeing, including a few in their 90s who were there when the original fleet was built. "We've logged nearly 300,000 volunteer hours on this project," says Murphy. "The first time Doc takes to the air, there'll be a big celebration." With any luck, that day is coming up soon.

All images courtesy Doc's Friends via Flickr, unless otherwise noted.

#####EOF##### You Can Now Run Some Code Hosted on GitHub | WIRED
You Can Now Run Some Code Hosted on GitHub

Le Tigre

Since launching in 2008, GitHub has become by far the largest place on the internet for hosting and collaborating on software code. The company, which is in the process of being acquired by Microsoft, now hosts more than 85 million projects, and boasts 31 million monthly users.

But while you've been able to store your code on GitHub, you couldn't actually run it. For that you needed a web server or a cloud service. Today at its annual GitHub Universe event, though, the company announced that it will now enable programmers to run certain types of software on its platform.

The company's new offering, GitHub Actions, is designed to help developers automate the various tasks involved in managing their code, such as testing and technical support. GitHub head of platform Sam Lambert says the company's users often write their own software and bots to handle tasks like automatically running a test when someone updates code or sending a text message to an on-call team member when someone submits a bug report. That requires running a separate server to handle these tasks, and, ultimately, more work writing and maintaining these sorts of support tools.

GitHub could try to offer these types of automation tools itself, but it couldn’t meet everyone’s needs, because different development teams have different requirements. Instead, it's letting developers build their own tools from within GitHub.

Lambert describes GitHub Actions as being a bit like the consumer service IFTTT ("if this, then that"), which enables users to run certain actions (like posting a photo to Twitter) based on specific triggers (such as the appearance of a photo on your Instagram feed). With GitHub Actions, a development team can link a particular trigger (new code being uploaded to a project) to a particular action (running a series of tests). Users can also write more complex workflows as code. For example, you could configure four separate actions to run simultaneously, and a fifth action to wait until all four have completed before triggering.
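
That fan-out, fan-in pattern is easiest to see in code. The sketch below expresses the semantics in plain TypeScript rather than GitHub's actual workflow syntax, and the action names are hypothetical:

```typescript
// The workflow described above: four actions in parallel and a fifth gated
// on all of them, expressed as plain TypeScript rather than GitHub's actual
// workflow configuration. Action names are made up for illustration.
async function runAction(name: string): Promise<string> {
  console.log(`running ${name}...`);
  await new Promise((resolve) => setTimeout(resolve, 100)); // stand-in for real work
  return `${name} ok`;
}

// Trigger: imagine this fires when new code is pushed to the project.
async function onPush(): Promise<void> {
  const results = await Promise.all(
    ["lint", "unit-tests", "build", "license-check"].map(runAction), // four at once
  );
  console.log(results);
  await runAction("deploy"); // the fifth action waits for all four to complete
}

onPush();
```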

The new service launches in beta today with 450 prebuilt "actions," and will enable users to write their own actions as well, or bundle existing applications using the popular tool Docker to run on GitHub. For example, HashiCorp has built a version of its computing infrastructure tool Terraform that can run as a GitHub action.

Lambert says one big benefit of GitHub Actions is that teams will be able to codify and share workflows. That means that when it comes time to start a new project, a team could use an “off the shelf” workflow and customize it to its own needs, rather than having to set up code-management tools from scratch.

It's hard not to wonder if this is a way for GitHub to start muscling in on Microsoft's competitors in the cloud computing market. But Lambert says the service has been in the works for more than a year, well before Microsoft's acquisition of GitHub was announced. And he doesn't see GitHub Actions as a competitor to cloud computing services. GitHub Actions are only able to run for an hour at a time, and the company has imposed other limits to keep them from being used as a public-facing web server. The idea is simply to run tools that developers use to write software, and not the final products those developers create.

Lambert admits that it's possible that some GitHub users might find a way to run public-facing web services from GitHub Actions, but says it won't be an ideal way to do so. In fact, one of the main uses for GitHub Actions could be pushing code for those final projects from GitHub to run on cloud services such as Amazon, Google, and, yes, Microsoft Azure.


#####EOF##### In Praise of Security Theater | WIRED
In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S. – one in 145 – and it becomes clear where the real risks are.
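
The arithmetic behind those odds, spelled out (treating the abduction rate as constant over the period):

```typescript
// Spelling out the odds from the paragraph above.
const abductions = 233;          // over 22 years
const years = 22;
const birthsPerYear = 4_000_000;
const abductionsPerYear = abductions / years;    // ~10.6 per year
const oneIn = birthsPerYear / abductionsPerYear; // ~1 in 378,000 (the article rounds to 375,000)
console.log("abduction odds: 1 in", Math.round(oneIn));
// Infant mortality, at 1 in 145, is roughly 2,600 times more likely:
console.log("mortality vs. abduction:", Math.round(oneIn / 145));
```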

And the 1-in-375,000 chance is not today's risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.

So why are hospitals bothering with RFID bracelets? I think they're primarily there to reassure the mothers. Many times during my friends' stay at the hospital, the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they're a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don't feel secure, and you can feel secure even though you're not really secure.
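
That "mathematical" half of security is ordinary expected-value arithmetic. Here is a sketch of the calculation, with every input invented purely for illustration (only the abduction odds come from the paragraphs above):

```typescript
// The cost-effectiveness test described above, with invented numbers.
const birthsPerYearAtHospital = 2_000;
const abductionOdds = 1 / 375_000;  // from the article
const harmPerAbduction = 1_000_000; // hypothetical dollar value of the harm
const riskReduction = 0.5;          // hypothetical: bracelets stop half of attempts
const braceletCost = 10;            // hypothetical, per baby

const expectedLossAvoided =
  birthsPerYearAtHospital * abductionOdds * riskReduction * harmPerAbduction;
const annualCost = birthsPerYearAtHospital * braceletCost;
console.log("expected loss avoided per year: $", expectedLossAvoided.toFixed(0)); // ~$2,667
console.log("cost of bracelets per year:     $", annualCost);                     // $20,000
// On these made-up numbers the bracelets fail the "reality" test, which is
// exactly why the "feeling" side of the ledger matters in what follows.
```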

The RFID bracelets are what I've come to call security theater: security primarily designed to make you feel more secure. I've regularly maligned security theater as a waste, but it's not always, and not entirely, so.

It's only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases – like with mothers and the threat of baby abduction – a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the '80s, in response to some highly publicized poisonings. As a countermeasure, it's largely security theater. It's easy to poison many foods and over-the-counter medicines right through the seal – with a syringe, for example – or to open and replace the seal well enough that an unwary consumer won't detect it. But in the '80s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people's perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I've often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn't have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it's OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It's not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren't worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs – and by this I mean trade-offs for genuine security – when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines and flying on airplanes – closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others – politicians, corporations and so on – can use security theater to make us feel more secure without doing the hard work of actually making us secure. That's the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that's never going to work.

- - -

Bruce Schneier is the CTO of BT Counterpane and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. You can contact him through his website. This week's column is dedicated to his new godson, Nicholas Quillen Perry.

#####EOF##### A One-Minute Attack Let Hackers Spoof Hotel Master Keys | WIRED
A One-Minute Attack Let Hackers Spoof Hotel Master Keys

David Sacks/Getty Images

In 2003, Finnish security researcher Tomi Tuominen was attending a security conference in Berlin when a friend's laptop, containing sensitive data, was stolen from his hotel room. The theft was a mystery: The staff of the upscale Alexanderplatz Radisson had no clues to offer, the door showed no signs of forced entry, and the electronic log of the door's keycard lock—a common RFID card reader sold by Vingcard—had recorded no entries other than the hotel staff.

The disappearing laptop was never explained. But Tuominen and his colleague at F-Secure, Timo Hirvonen, couldn't let go of the possibility that Vingcard's locks contained a vulnerability that would let someone slip past a hotel room's electronically secured bolt. And they'd spend roughly the next decade and a half proving it.

Master Key

At the Infiltrate conference in Miami later this week, Tuominen and Hirvonen plan to present a technique they've found to not simply clone the keycard RFID codes used by Vingcard's Vision locks, but to create a master key that can open any room in a hotel.

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel's possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free rein to roam any room in the building. The whole process takes about a minute.

F-Secure

"Basically it blinks red a few times, and then it blinks green," says Tuominen. "Then we have a master key for the whole facility."

'There's a good chance that not all the hotels have fixed this.'

Tomi Tuominen, F-Secure

The two researchers say that their attack works only on Vingcard's previous-generation Vision locks, not the company's newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard's Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million. They note, though, that the total number is tough to measure, since they can't closely track how many of the older locks have been replaced. Tuominen and Hirvonen say that they've collected more than a thousand hotel keycards from their friends over the last 10 years, and found that roughly 30 percent came from Vingcard Vision locks that would have been vulnerable to their attack.

Tuominen and Hirvonen quietly alerted Assa Abloy to their findings a year ago, and the company responded in February with a software security update that has since been available on its website. But since Vingcard's locks don't have internet connections, that software has to be installed manually by a technician, lock by lock. "There's a good chance that not all the hotels have fixed this," Tuominen says.

The researchers demonstrate their attack in this video, where they show they can use their Proxmark tool to access restricted floors on a hotel elevator.

In a phone call with WIRED, Assa Abloy's hospitality business unit head Christophe Sut downplayed the risk to hotel guests, and noted that F-Secure's researchers needed years of reverse-engineering work and expertise to develop their lock-hacking technique. But he urged hotels that use the Vingcard Vision locks to install the upgrade. "This is the new normal. If you have software you need to upgrade it all the time," Sut says. "We upgrade our phones and computers. We need to upgrade locks as well."

Narrowing the Field

Tuominen and Hirvonen say they're not releasing all the details of the vulnerabilities in Vingcard's locks for fear of helping burglars or spies break into rooms. Six years ago, by contrast, a security researcher published on the web the code necessary to exploit a glaring vulnerability in widely used Onity keycard locks. That revelation led to a cross-country burglary spree that hit as many as a hundred hotel rooms.

But the two Finns say they spotted what they believed might be weaknesses in Vingcard's code system as soon as they examined it in 2003, at a time when the system used mag-stripe technology rather than touchless radio-frequency identification, or RFID. Vingcard's system encodes a unique cryptographic key into each keycard—and another into every hotel's master keys—that are all designed to be unguessable. But by reading the magnetically encoded key values of keycards that had been used in the system and looking for patterns in those numbers, they began to narrow down the "key space" of possible codes.

Even so, the number of possible master key codes remained far too large to enable a practical break-in, requiring thousands upon thousands of tries. "Even with those implementation mistakes, it looked like the key space would be too big," says Hirvonen. But he and Tuominen continued to puzzle over the system on and off for years, even after Vingcard switched its Vision locks to RFID, analyzing keycards they collected and reverse-engineering a copy of the Vingcard front-desk software they'd obtained.

Beyond creating a master key to open any door in a hotel, they could also spoof specific 'floor' and 'section' keys.

Finally, they say, they were tipped off to one final method of narrowing down the possible master key codes in Vingcard Vision locks by a clue on the company's Assa Abloy University website for training hotel staff. Though they won't elaborate further, the researchers note that the trick somehow involves a correlation between the location of a door in a hotel and its RFID enciphered code. The system means that beyond creating a master key to open any door in a hotel, they could also spoof specific "floor" and "section" keys that open only a subset of doors in a building—all the better to impersonate the sort of less-powerful keys that hotel housekeeping staff hold, for instance.

The F-Secure researchers admit they don't know if their Vingcard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard's products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target's room, albeit one that required re-programming the lock. "Most probably Mossad has a capability to do something like this," Tuominen says.

Given that Tuominen and Hirvonen have since worked with Assa Abloy to help fix that vulnerability, the real-world risk of those RFID-enabled intrusions may be smaller than ever. But for the coming months, as hotels get the message to upgrade their software, it never hurts to flip the door bolt, too.

#####EOF##### Australia's Encryption-Busting Law Could Impact Global Privacy | WIRED
Australia's Encryption-Busting Law Could Impact Global Privacy

Getty Images

Australia's parliament passed controversial legislation on Thursday that will allow the country's intelligence and law enforcement agencies to demand access to end-to-end encrypted digital communications. This means that Australian authorities will be able to compel tech companies like Facebook and Apple to make backdoors in their secure messaging platforms, including WhatsApp and iMessage. Cryptographers and privacy advocates—who have long been staunch opponents of encryption backdoors on public safety and human rights grounds—warn that the legislation poses serious risks, and will have real consequences that reverberate far beyond the land down under.

For months, the bill has faced criticism that it is overly broad, vaguely worded, and potentially dangerous. The tech industry, after all, is global; if Australia compels a company to weaken its product security for law enforcement, that backdoor will exist universally, vulnerable to exploitation by criminals and governments far beyond Australia. Additionally, if a company makes an access tool for Australian law enforcement, other countries will inevitably demand the same capability.

"The Australian legislation is particularly broad and vague, and would serve as an extremely poor model."

Greg Nojeim, CDT

The new law also allows officials to approach specific individuals—such as key employees within a company—with these demands, rather than the institution itself. In practice, they can force the engineer or IT administrator in charge of vetting and pushing out a product's updates to undermine its security. In some situations, the government could even compel the individual or a small group of people to carry this out in secret. Under the Australian law, companies that fail or refuse to comply with these orders will face fines up to about $7.3 million. Individuals who resist could face prison time.

Australian lawmakers nonetheless lauded the bill, saying it will enable crucial capabilities in organized crime and anti-terrorism investigations. Even the bill's opponents within parliament, who had initially called for significant amendments to the draft, eventually relented on Thursday.

“We will pass the legislation, inadequate as it is, so we can give our security agencies some of the tools they say they need,” Bill Shorten, the opposition Labor party leader, told reporters.

Global Impact

Though Australia will become the testing ground, technologists and privacy advocates warn that the law will swiftly impact global policy. All of Australia's intelligence allies—the United States, the United Kingdom, Canada, and New Zealand, known collectively as the Five Eyes—have spent decades lobbying for these mechanisms.

"The debate about simplifying lawful access to encrypted communication carries a considerable risk of regulations spilling to other countries," says Lukasz Olejnik, a security and privacy researcher and member of the W3C Technical Architecture Group. "Once the capabilities exist, there will be many parties interested in similar access. It would spread."

Just last week, US deputy attorney general Rod Rosenstein advocated what he called "responsible encryption" at a Washington, DC symposium. And the UK already passed the Investigatory Powers Act at the end of 2016—often called the Snoopers' Charter—that attempts to set up a framework for compelling companies to give investigators access to users' encrypted communications. So far, the UK law has been dogged by judicial challenges, and it doesn't allow government requests to be made of individuals like Australia will. But efforts to develop a legal framework for such surveillance requests continue to proliferate.

Privacy advocates note that the Five Eyes have increasingly used euphemisms like "responsible encryption," implying some sort of balance. For example, Australia's new law has a section called "Limitations," which says, "Designated communications provider must not be requested or required to implement or build a systemic weakness or systemic vulnerability."

"It’s just shocking to see this happen."

Danny O'Brien, EFF

Which sounds promising in theory. But the definition indicates some doublespeak. "Systemic vulnerability means a vulnerability that affects a whole class of technology, but does not include a vulnerability that is selectively introduced to one or more target technologies that are connected with a particular person," the Australian law says. In other words, intentionally weakening every messaging platform out there with the same backdoor wouldn't fly, but developing tailored access to individual messaging programs, like WhatsApp or iMessage, is allowed.

Increasingly, intelligence and law enforcement seem to want tech companies to be able to silently loop government officials into a suspect's encrypted communications. For example, an iMessage conversation that you think is just between you and your friend might actually be a group chat that includes an investigator who was invisibly added. The messages would all still be end-to-end encrypted, just between the three of you, instead of the two of you.
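
Mechanically, proposals like this lean on how group messaging already works: the sender's client encrypts each message to every key on the membership list, and whoever controls that list controls who can read. A toy sketch, with a stand-in for the real cryptography:

```typescript
// Toy sketch of the "invisible participant" idea: group messages are
// encrypted per member, so controlling the membership list controls who can
// read. encryptTo is a stand-in, not a real cipher.
const encryptTo = (recipientKey: string, msg: string) => `[for ${recipientKey}] ${msg}`;

class GroupChat {
  private members: string[] = [];
  private hidden: string[] = []; // members the UI never displays

  add(key: string, visible = true) {
    (visible ? this.members : this.hidden).push(key);
  }
  send(msg: string): string[] {
    // The sender's client faithfully encrypts to every key it is handed,
    // including the invisibly added one.
    return [...this.members, ...this.hidden].map((k) => encryptTo(k, msg));
  }
  displayedMembers() {
    return this.members; // what you and your friend see
  }
}

const chat = new GroupChat();
chat.add("you");
chat.add("friend");
chat.add("investigator", false); // silently looped in
console.log(chat.displayedMembers());            // ["you", "friend"]
console.log(chat.send("end-to-end encrypted?")); // three ciphertexts, not two
```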

Cryptographers and privacy advocates are quick to note, though, that as with any such mechanism, criminals and other adversaries would figure out how to exploit it as well, creating an even larger public safety issue—and potentially endangering the operations of the entity that requested the workaround in the first place.

"They say, ‘we agree that we’re not going to put in backdoors or undermine encryption, but we do reserve the right to compel companies to assist us in getting all the data,'" says Danny O'Brien, international director of the Electronic Frontier Foundation. "And everyone in the technical community is somewhat confused by this, because there really isn’t a great deal of space between compelling people to give up plaintext and creating a backdoor. That’s just the definition of a backdoor."

Cryptographers have spent decades articulating a fundamental objection to backdoors, including in the seminal 2015 paper "Keys Under Doormats". But the recent rise in legislation like Australia's has prompted a fresh wave of rebuttals. For example, IEEE, the international professional engineering association, said unequivocally in a June position statement that "Exceptional access mechanisms would create risks...Efforts to constrain strong encryption or introduce key escrow schemes into consumer products can have long-term negative effects on the privacy, security and civil liberties of the citizens so regulated."

Privacy advocates say that Australia's new law has other problems, too, especially in its vagueness about when and how often investigators can make data requests. This could lead to overreach, they say, especially since the law also restricts what companies can disclose about the number of requests they've received in some situations.

"One country's demands of a global provider or a global device maker can impact their operations on a global scale," says Greg Nojeim, director of the Freedom, Security and Technology Project at the Center for Democracy & Technology. "And there is a risk that other countries will enact similar legislation to compel companies to build in backdoors into encryption. The Australian legislation is particularly broad and vague, and would serve as an extremely poor model."

The Other Shoe

For people on both sides of the debate, the question now is how laws like Australia's will function in practice, and whether tech companies will comply with encryption-weakening orders or resist. For its part, Apple wrote statements objecting to both the UK's Investigatory Powers Act and Australia's new legislation before they were passed. And the company went to the mat over the issue in the US as well, when it refused to build a tool to help the FBI access the iPhone of one of the 2015 San Bernardino shooters.

It is not clear that companies will be able to effectively resist as more laws emerge, though, particularly if Australia has success targeting individuals. Australian Parliament will consider amendments to the law next year, but privacy advocates and technologists say the situation so far is worrying. "It’s just shocking to see this happen in Australia," EFF's O'Brien says. "The other shoe is dropping."

Fines and especially prison time are already draconian punishments for failing or refusing to essentially break the security of a digital product. But the even deeper danger of Australia's new law, and the broader movement to enact backdoor-friendly legislation, is the logical extreme in which countries simply block access to technology that offers robust privacy and security protections to users. Authoritarian states like China, Russia, and Iran already do this. Now the Five Eyes are closer to it than ever.


#####EOF##### The Most Important Startup's Hardest Worker Isn't a Person | WIRED
The Most Important Startup's Hardest Worker Isn't a Person

Ariel Zambelich/WIRED

When you walk into the San Francisco headquarters of GitHub—the startup that sits at the heart of the software universe—it looks as if you've walked into the White House. The lobby is a wonderfully amusing recreation of the Oval Office, right down to the striped wallpaper, the gold curtains, and the American flag in the corner. The reception desk is, yes, a replica of the President's desk. But as you approach and check in for an afternoon meeting, the decor isn't nearly as interesting as the technology: Hubot sends notifications to everyone you're scheduled to meet with.

This is a simple thing. When you sign into the iPad sitting on the President's desk, Hubot runs a software script that shuttles those notifications through the company's online chat system. But that's only a small part of what Hubot can do. From the same chat program, GitHubbers can ask Hubot which decidedly hip San Francisco food trucks are set up down the street—and Hubot will tell them. If they need a dial-in number for an afternoon conference call, Hubot can provide it. If they need something translated from Spanish, Hubot will translate. When prompted, Hubot can also post a tweet, unveil a graph of the latest GitHub.com traffic numbers, or boot up some servers to accommodate more traffic. Hubot can even tell a joke or find an animated GIF of something completely frivolous, like a dance party. In other words, Hubot is good for a pick-me-up.

'It's a new way of working.'

Sam Lambert, GitHub

Sam Lambert, the director of systems at GitHub, calls Hubot "the hardest working GitHubber." That's a company-wide in-joke. Hubot isn't really a GitHubber. He's a bit of software that plugs into the GitHub chat system. About five years ago, a guy named Ryan Tomayko built Hubot as an easier way for the company's engineers to manage and modify all the hardware and software underpinning GitHub.com. Simply by sending a message to Hubot—much as they'd send a message to anyone else from inside the GitHub chat client—engineers could update the operating systems driving GitHub's servers, delete data from the databases, or take entire servers offline. But in the years since, Hubot has evolved into something that supports everybody inside the company—not only handling a wide range of tasks but providing a conversational context for those tasks. And as time goes on, this becomes a central record of (just about) everything that happens inside the company.

"It's a new way of working," Lambert says.

Hubot points to a future where all businesses operate in this more automated way. Across Silicon Valley and beyond, many companies have adopted chat systems like the one GitHub built for itself. They include tools like Slack and Hipchat—software that provides a central place for a company's employees to communicate in real time—and businesses can equip these tools with bots that will do their bidding. Some of these bots are frivolous, but not all of them. In fact, GitHub has open-sourced the code for Hubot, making it freely available to all, and now it works with both Slack and Hipchat, not to mention classic IRC systems and Google's Messenger. Other businesses, in turn, have adopted Hubot as a way of handling critical tasks. This includes Box.com, the big-name Silicon Valley file-sharing startup that recently went public.

The rise of Hubot is a nice way of illustrating the rise of GitHub itself. GitHub.com is a place where software developers can share and collaborate on code, and it has become the primary repository for the world's open source software, embraced by everyone from Google to Facebook to Microsoft. Hubot is one of the many software projects GitHub has shared on its own service, and the spread of this bot mirrors the open source movement as a whole.

Hubot was designed so that anyone inside GitHub could use the JavaScript programming language (or similar languages) to write new scripts for this automated system. If someone wanted Hubot to automatically determine what food trucks were set up down the street, they could write a script for that very task, programming the bot to scrape the latest information from the web. If they wanted Hubot to translate from one language to another, they could write a script that tapped into the Google Translate API—an online service that provides such translation. And now that Hubot is open source, anyone beyond GitHub can write such scripts and share them. As Slack and Hipchat continue to gain popularity, odds are that Hubot's reach will extend as well. Such is the way of open source.
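To make that concrete, here is a minimal sketch of what such a script can look like, written against Hubot's standard respond/send API. The feed URL is hypothetical, and GitHub's actual food-truck script surely differs:

```javascript
// food-trucks.js: a minimal Hubot script sketch. The JSON feed URL is
// hypothetical; GitHub's real script and data source aren't public.
module.exports = (robot) => {
  robot.respond(/food trucks?/i, (res) => {
    robot.http('https://example.com/food-trucks.json') // hypothetical feed
      .get()((err, response, body) => {
        if (err) return res.send("Couldn't reach the food-truck feed.");
        const trucks = JSON.parse(body); // assume the feed is an array of names
        res.send(`Parked nearby today: ${trucks.join(', ')}`);
      });
  });
};
```

Drop a file like that into a Hubot installation's scripts directory and the bot picks it up; that low barrier is a big part of why the library of scripts has grown the way it has.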

GitHub

'It's the Culture of the Company'

When people discuss Hubot outside of GitHub, they often describe it as a tool that does "ChatOps," meaning it handles "operations" tasks—stuff like configuring new servers and databases or updating the code that drives GitHub.com. The term was coined by GitHub, and others have built additional ChatOps bots, including tools like Lita and Err.

The ChatOps idea grew out of a movement called "DevOps," where new-age tools like Chef and Puppet allow IT types to automatically configure and update massive amounts of hardware and software running across their organizations. ChatOps adds a conversational element to this movement. Hubot provides a new, easier, and more powerful way for GitHub to manage, modify, and expand the technology underpinning its operation. "GitHub, the website, is updated, all day long, with a bot," Lambert says.
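As a rough illustration of the pattern, a ChatOps deploy command might look something like the sketch below. The trigger phrase and the deployment hook are invented for this example; they are not GitHub's actual tooling:

```javascript
// deploy.js: a ChatOps-style Hubot command sketch. The deployment call is a
// placeholder; a real script would hit the deploy system's API and report
// the result back into the same chat room, in front of the whole team.
module.exports = (robot) => {
  robot.respond(/deploy (\S+) to (\S+)/i, (res) => {
    const [, app, env] = res.match; // res.match holds the regex captures
    res.send(`Deploying ${app} to ${env}...`);
    // deploySystem.trigger(app, env) would go here (hypothetical API).
  });
};
```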

This is also how Box uses Hubot. And because so many other companies (Box's customers) use its infrastructure to house private data, the startup recently beefed up the bot with added security, so it can be sure that the person making a change to some critical system is the person authorized to make that change. This work has been open sourced too, so that anyone else can use it.

'You can use Hubot to do anything that you can write code for.'

Josh Nichols, GitHub

But the possibilities extend well beyond classic server room tasks, as GitHub has shown. "It's the culture of the company," Lambert says of Hubot. Hubot can tell him which GitHub employees are currently sitting in offices nearby. When he's about to place a call to a GitHubber on the other side of the globe, Hubot can tell him what time zone they're in. Or, if you're on the GitHub sales team, Hubot can turn up info about a company you're trying to sell to. If you're a coder, Hubot will tell you when a change has been made to a particular piece of GitHub software. If you're in the finance department, it can show you the company's latest revenue figures.

"You can use Hubot to do anything that you can write code for," says Josh Nichols, the GitHubber who now oversees the Hubot project. The Hubot homepage refers to the tech as "A customizable, life embetterment robot."

Katelyn Bryant, who works in PR at GitHub, uses it as a way of tweeting to the company's official Twitter account or, yes, finding a dance party GIF. "We use Blue Jeans for video conferencing," she explains. "I can say: 'Hubot, bluejeans me.' And it will send me a personal link so I can start a meeting."

Hubot of the Future

Well, what she really says is: "/blue jeans me." Hubot springs into action whenever it sees a "/." The system has a "command line" feel to it, meaning that, much like with old-school computer terminals, you have to use specific commands for it to work. But at the same time, it feels conversational, in large part because you send commands to Hubot as you would send messages to any human. You open up a chat room and send Hubot a note, and this becomes part of a larger discussion. When it does your bidding, Hubot appears among the other chatter, represented by his own robot-like icon.

"You get this incredible context for what you're doing and what your team is doing," Lambert says. "When something goes wrong, everyone piles into the infrastructure [chat] room, and you can watch the remediation of the incident happen. It's highly coordinated. People can understand the context. It helps you work through things as a team."

What's more, unlike a command line, Hubot and the GitHub chat client provide an easy-to-access log of all past messages. If Lambert wants to see what happened one day last week when the company updated a bunch of servers, he merely opens a chat log. "I can return to that point in time," he says.

A Hubot Dependency

As we sit in the Safari Room on the second floor of GitHub headquarters—the room with stuffed toy animal heads mounted on the wall—Lambert tries to remember the command for asking Hubot which food trucks are nearby. He can't. And that's a drawback.

But as machine learning techniques continue to progress, computers are getting better at understanding and responding to natural language—the way humans naturally speak. Google recently published research describing a chatbot that can discuss the meaning of life, and though flawed, it's pretty impressive. This is the bigger (and more distant) future of Hubot. It becomes all the more useful if you can ask it, in ordinary English, about the food trucks.

That is still to come. But Hubot is enormously powerful today. GitHub is five years into using it, and it's deeply ingrained in the company's culture. If you walk around the GitHub offices, you'll see Hubot stickers on laptops. Company artists have developed a cartoon-ish alter-ego for the bot that looks kinda like a steel-plated flying Minion—without the goofy factor (see the image above, pulled from the Hubot homepage). This 'toon turns up in the video that opened the company's recent developer conference, alongside that other GitHub mascot, the Octocat (see video above).

Lambert says that Hubot is so entwined with the way he and the company work that he can't imagine working at a company that doesn't use it. GitHub is a particularly ripe environment for this kind of thing. It's staffed by coders predisposed to writing scripts that tap all sorts of online APIs and perform all sorts of tasks. But we're moving toward a world where more and more people are comfortable with such coding. JavaScript is a simple language. That food-truck script was written by someone who works in GitHub's marketing department—someone who's not a coder by trade.

At the same time, tools like Slack are evolving into systems that behave much like Hubot by default. Slack can seamlessly integrate with myriad outside services, including everything from Blue Jeans video conferencing to the Google Drive file storage service to various software development tools. And it can automate the way you interact with these services, handling many of the same tasks without asking you or your company to write JavaScript. "They've made the chat client your office's OS," says Keith Axline, a developer with a Northwestern company called FinSight (and a former WIRED employee). "You can do a lot of the same things, not necessarily through a bot but through Slack itself."

At GitHub, they like that Hubot is personified. They like asking Hubot questions. They like, well, writing Javascript. But we may see the larger market move towards a simpler, but equally effective, paradigm. However things evolve, Sam Lambert may soon find that if he wants to leave GitHub, there are many companies that work the same way. Very many.

#####EOF##### U.S. Gov Insists It Doesn't Stockpile Zero-Day Exploits to Hack Enemies | WIRED
U.S. Gov Insists It Doesn't Stockpile Zero-Day Exploits to Hack Enemies

White House Cybersecurity Policy Coordinator Michael Daniel listens to questions during the Reuters Cybersecurity Summit in Washington, May 14, 2013.
Jonathan Ernst/Reuters/Corbis

For years the government has refused to talk about or even acknowledge its secret use of zero-day software vulnerabilities to hack into the computers of adversaries and criminal suspects. This year, however, the Obama administration finally acknowledged in a roundabout way what everyone already knew—that the National Security Agency and law enforcement agencies sometimes keep information about software vulnerabilities secret so the government can exploit them for purposes of surveillance and sabotage.

Government sources told the New York Times last spring that any time the NSA discovers a major flaw in software it has to disclose the vulnerability to the vendor and others so that the security hole can be patched. But they also said that if the hole has “a clear national security or law enforcement” use, the government can choose to keep information about the vulnerability secret in order to exploit it. This raised the question of just how many vulnerabilities the government has withheld over the years to exploit.

In a new interview about the government's zero-day policy, Michael Daniel, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues, insists to WIRED that the government doesn't stockpile large numbers of zero days for use.

"[T]here's often this image that the government has spent a lot of time and effort to discover vulnerabilities that we've stockpiled in huge numbers ... The reality is just not nearly as stark or as interesting as that," he says.

Zero-day vulnerabilities are software security holes that are not known to the software vendor and are therefore unpatched and open to attack by hackers and others. A zero-day exploit is the malicious code crafted to attack such a hole to gain entry to a computer. When security researchers uncover zero-day vulnerabilities, they generally disclose them to the vendor so they can be patched. But when the government wants to exploit a hole, it withholds the information, leaving all computers that contain the flaw open to attack—including U.S. government computers, critical infrastructure systems and the computers of average users.

Daniel says the government's retention of zero-days for exploitation is the exception, not the rule, and that the policy for disclosing zero-day vulnerabilities by default—aside from special-use cases—is not new but has been in place since 2010. He won't say how many zero-days the government has disclosed in the four years since the policy went into effect or how many it may have been withholding and exploiting before the policy was established. But during an appearance at Stanford University earlier this month, Admiral Mike Rogers, who replaced retiring Gen. Keith Alexander as the NSA's new director last spring, said that "by orders of magnitude, the greatest numbers of vulnerabilities we find, we share."

That statement, however, appears to contradict what a government-appointed review board said last year. So WIRED spoke with Daniel in an effort to get some clarity on this and other questions about the government's zero-day policy.

Timeline of Policy in Question

Last December, the President’s Review Group on Intelligence and Communications Technologies seemed to suggest the government had no policy in place for disclosing zero days when it recommended in a public report that only in rare instances should the U.S. government authorize the use of zero-day exploits, and then only for “high priority intelligence collection.” The review board, convened by President Obama in the wake of Edward Snowden's revelations about the NSA's surveillance activities, produced its lengthy report (.pdf) to provide recommendations for reforming the intelligence community's activities. The report made a number of recommendations on various topics, but the one addressing zero-days was notable because it was the first time the government's use of exploits was acknowledged in such a forum.

The review board asserted that "in almost all instances, for widely used code, it is in the national interest to eliminate software vulnerabilities rather than to use them for US intelligence collection." The board also said that decisions about withholding a vulnerability for purposes of exploitation should only be made “following senior, interagency review involving all appropriate departments.” And when the government does decide to withhold information about a zero-day hole to exploit it, that decision should have an expiration date.

Obama appeared to ignore the board's recommendations when, a month later, he announced a list of NSA reforms that contained no mention of zero days or the government's policy about using them. It wasn't until the Heartbleed vulnerability was discovered in April, and a news report falsely claimed the NSA had known about the flaw and kept silent about it to exploit it, that the administration finally went public with a formal statement about its zero-day policy. In addition to comments given to the Times announcing the default disclosure policy, Daniel published a blog post stating that the White House had also "re-invigorated" its process for implementing this "existing policy."

The statements, however, raised more questions than they answered. Was this a new policy or had the government been disclosing vulnerabilities prior to this announcement? What did "reinvigorated" mean? And did the policy apply equally to zero-day vulnerabilities that the government purchased from contractors or just ones that the NSA itself discovered?

Daniel says although the default-disclosure policy was established in 2010 it "had not been implemented to the full degree that it should have been," hence the government's use of the term "reinvigorated" to describe this new phase. The relevant agencies, he says, "had not been doing sufficient interagency communications and ensuring that everybody had the right level of visibility across the entire government" about vulnerabilities that were discovered.

What this means is that although "they probably were disclosing the vulnerability" by default, they "may not have been communicating that to all the relevant agencies as regularly as they should have been." Agencies, he says, might have been communicating "at the subject-matter expert level," but the communication may not have been happening as consistently, in as coordinated a fashion, or within the timelines that the policy dictated. This was the part, he says, that was "reinvigorated" this year "to make sure it was actually happening consistently and as thoroughly as the policy called for."

Daniel didn't say exactly when in 2010 the policy was initiated or what prompted it, but 2010 is the year the Stuxnet virus/worm was discovered infecting machines in Iran. Stuxnet was a digital weapon reportedly designed by the U.S. and Israel to sabotage centrifuges enriching uranium for Iran's nuclear program. It used five zero-day exploits to spread, one of which was a fundamental vulnerability in the Windows operating system that affected millions of machines around the world, yet information about the vulnerability was kept secret so the U.S. and Israel could use it to spread Stuxnet on machines in Iran.

Asked why, if the policy had been in place since 2010, the review board didn't seem to know about it when they made their recommendations last December, Daniel says he didn't know. So WIRED contacted Peter Swire, a member of the review board and a professor of law and ethics at the Georgia Institute of Technology, to clarify if the group had been briefed on the existing zero-day policy before they wrote their report. Swire says they had, but parsed his words carefully as he explained that the group's recommendations to the president stemmed from the fact that the policy wasn't being implemented as the board thought it should be, noting that certain presumptions about the existing policy needed to be clarified and strengthened.

"A presumption might mean you take action 55 percent of the time [to disclose a vulnerability] or a presumption might mean we do it 99 percent of the time," Swire says. "A 99 percent presumption is a much stronger presumption; it means exceptions are much less frequent.... Our recommendation was to have significantly fewer exceptions."

The group also recommended, he says, a shift in the "equities" process—the process used to determine when a vulnerability is withheld and when it is disclosed—from the NSA to the White House, implying that until this year the NSA or the intelligence community had been the sole arbiter of decisions about the use of zero-day vulnerabilities and exploits. The review board had recommended that the National Security Council have an oversight role in this process, and Daniel confirmed to WIRED that his office now oversees the process. So it appears that although Obama didn't publicly acknowledge the review board's recommendations when he announced his reforms of the NSA last January, he did in fact implement their two recommendations about the government's handling of zero days—by strengthening the default presumption for disclosing zero days and giving someone other than the NSA authority over deciding when to disclose or withhold zero days.

On How the Interagency Equities Process Works

Daniel wouldn't go into detail about the equities process or who is involved in it other than to say "the agencies that you would expect" use a "multi-factor test" to examine vulnerabilities to determine how extensively the software is used in critical infrastructure and US government systems, and how likely it is that malicious actors have already gotten hold of it or may get hold of it. "All of those questions that are laid out, we require that analysis and discuss each one of those points. Then groups of subject-matter experts across the government make a recommendation to this interagency group that I chair here on the National Security Council." The subject-matter experts provide "their best judgment about [a vulnerability's] widespreadness or how likely it is that researchers are going to be able to discover it or how unlikely it is that a foreign adversary has it."

He reiterated that the government's default position would be to disclose but that there "are a limited set of vulnerabilities that we may need to retain for a period of time in order to conduct legitimate national security intelligence and law enforcement missions."

He wouldn't say what the period of time would be for withholding information about vulnerabilities to exploit them before disclosing them but says it "is not one that lasts in perpetuity. In fact the policy actually says that we must regularly review a decision to retain a vulnerability and make sure that all the factors that I mentioned before still hold." That review, he says, happens several times a year. "So the situation may change and we may decide at that point that it's time to actually disclose the vulnerability," he notes.

On Stockpiling Vulnerabilities

Daniel would not say how many vulnerabilities the government has disclosed or retained so far, but he denied that it maintains a vast repository of zero days.

"What we can say is that the overwhelming majority of those that we find we do disclose," he notes, echoing the words Rogers had used. "The idea that we have these vast stockpiles of vulnerabilities stored up—you know, Raider's of the Lost Ark style—is just not accurate. So the default position really is that we disclose most of the vulnerabilities that we find to the vendors. We just don't take credit for it for a variety of reasons and have no desire to take credit for it."

Asked if the disclosure policy also applies to zero-day vulnerabilities and exploits the government purchases from contractors and independent sellers, Daniel says it does.

"It's difficult for me to talk about where we might find the vulnerabilities or the source of the vulnerabilities that the US government comes across because of course a lot of that is classified," he says. "[B]ut the policy remains that our default position is going to be and our strong bias is going to be that we will disclose vulnerabilities to vendors. If you picked an economy that was digitally dependent, the United States is certainly at the top of the list, right? So it's highly likely that we are going to face a situation where a vulnerability would be something that we would be concerned about from a network defense standpoint. So it shouldn't be surprising that our bias is going to be towards disclosing it."

How exactly this would work, however, is unclear. The government doesn't necessarily own the information and code it purchases from vendors. Not every exploit sold is purchased under an exclusivity agreement. Sellers may also ask the government to sign an NDA related to a sale.

Daniel replied that it made perfect sense to purchase some vulnerabilities to disclose if, for example, the government learned that someone was peddling a vulnerability that affected a lot of critical infrastructure networks and the government wanted to take it off the market and get it fixed. "I'm not saying that would be the primary method or even the most desirable method, but it is certainly one that you could contemplate the US government pursuing if we thought the vulnerability was significant enough for us to try to get it patched," he says.

But it's unclear how the default disclosure process applies when the government is also purchasing vulnerabilities from vendors specifically to exploit them. What would be the point of spending U.S. tax dollars on a vulnerability only to burn it by disclosing it? Daniel sidestepped the question, saying, "[T]here's often this image that the government has spent a lot of time and effort to discover vulnerabilities that we've stockpiled in huge numbers and similarly that we would be purchasing very, very large numbers of vulnerabilities on the open market, the gray market, the black market, whatever you want to call it. And I think the reality is ... that the numbers are just not anywhere near what people believe they are…."

#####EOF##### The FBI Needs Hackers, Not Backdoors | WIRED
The FBI Needs Hackers, Not Backdoors

Photo: dustball / Flickr

Just imagine if all the applications and services you saw or heard about at CES last week had to be designed to be "wiretap ready" before they could be offered on the market. Before regular people like you or me could use them.

Yet that’s a real possibility. For the last few years, the FBI’s been warning that its surveillance capabilities are "going dark," because internet communications technologies – including devices that connect to the internet – are getting too difficult to intercept with current law enforcement tools. So the FBI wants a more wiretap-friendly internet, and legislation to mandate it will likely be proposed this year.

But a better way to protect privacy and security on the internet may be for the FBI to get better at breaking into computers.

Whoa, what? Let us explain.

Whether we like them or not, wiretaps – legally authorized ones only, of course – are an important law enforcement tool. But mandatory wiretap backdoors in internet services would invite at least as much new crime as it could help solve.

Especially because we’re knee deep in what can only be called a cybersecurity crisis. Criminals, rival nation states, and rogue hackers routinely seek out and exploit vulnerabilities in our computers and networks – much faster than we can fix them. In this cybersecurity landscape, wiretapping interfaces are particularly juicy targets.

Every connection, every interface increases our exposure and makes criminals' jobs easier.

Matt Blaze & Susan Landau

About

Matt Blaze & Susan Landau

Matt Blaze directs the Distributed Systems Lab at the University of Pennsylvania, where he studies cryptography and secure systems. Prior to joining Penn, he was a distinguished member of technical staff at AT&T Bell Labs. He can be found on Twitter at mattblaze. Susan Landau is currently a Guggenheim Scholar. She was a distinguished engineer at Sun Microsystems. Landau is the author of Surveillance or Security? The Risks Posed by New Wiretapping Technologies.

We've Been Here Before

Two decades ago, the FBI complained it was having trouble tapping the then-latest cellphones and digital telephone switches. After extensive FBI lobbying, Congress passed the Communications Assistance for Law Enforcement Act (CALEA) in 1994, mandating that all telephone switches include FBI-approved wiretapping capabilities.

CALEA was justifiably controversial, not least because its requirement for "backdoors" across our communications infrastructure seemed like a security nightmare: How could we keep criminals and foreign spies from exploiting weaknesses in the new wiretapping features? Would we even be able to detect them when they did?

Those fears were soon borne out. In 2004, a mysterious someone – the case was never solved – hacked the wiretap backdoors of a Greek cellular switch to listen in on senior government officials ... including the prime minister.

Think this could only happen abroad? Some years ago, the U.S. National Security Agency discovered that every telephone switch for sale to the Department of Defense had security vulnerabilities in its mandated wiretap implementation. Every. Single. One.

Given these risks, you might think now's a good time to scale back CALEA and harden our communications infrastructure against attack.

But the FBI wants to do the opposite. They want to massively expand the wiretap mandate beyond phone services to internet-based services: instant messaging systems, video conferencing, e-mail, smartphone apps, and so on.

Yet on the internet, the threats – and consequences of compromise – are even more serious than with telephone switches. Not only would wiretap mandates put a damper on innovation, but the FBI is effectively choosing to make it easier to solve some crimes by opening the door to other crimes.

Are these really the only options we have? No.

The FBI wants to massively expand the wiretap mandate beyond phone services to internet-based services.

Bugs Are Backdoors, Too

If it turns out that important surveillance sources really are going dark – and that’s a big if (it's not only on TV that modern tech already makes it easier to surveil suspects) – there's no need to mandate wiretap backdoors.

That's because there’s already an alternative in place: buggy, vulnerable software.

The same vulnerabilities that enable crime in the first place also give law enforcement a way to wiretap – when they have a narrowly targeted warrant and can't get what they're after some other way. The same flaws that give us Patch Tuesday followed by Exploit Wednesday, that make opening e-mail attachments feel like Russian roulette, and that keep anti-virus software and firewalls from ever fully protecting us online also provide exactly the backdoors the FBI wants.

Since the beginning of software time, every technology device – and especially ones that use the internet – has had, and continues to have, vulnerabilities. The sad truth is that as hard as we may try, as often as we patch what we can patch, no one knows how to build secure software for the real world.

Instead of building special (and more vulnerable) new wiretapping interfaces, law enforcement can tap their targets' devices and apps directly by exploiting existing vulnerabilities. Instead of changing the law, they can use specialized, narrowly targeted exploit tools to do the tapping.

In fact, targeted FBI computer exploits are nothing new. When the FBI placed a "keylogger" on suspected bookmaker Nicky Scarfo Jr.'s computer in 2000, it captured his PGP password, allowing the government to decrypt his files and win a conviction. A few years later, the FBI developed "CIPAV," a piece of software that lets investigators deliver such spying tools to a target's machine electronically.

The sad truth is that no one knows how to build secure software for the real world.

Exploits aren't a magic wiretapping bullet. There's engineering effort involved in finding vulnerabilities and building exploit tools, and that costs money.

And when the FBI finds a vulnerability in a major piece of software, shouldn’t they let the manufacturer know so innocent users can patch? Should the government buy exploit tools on the underground market or build them themselves? These are difficult questions, but they’re not fundamentally different from those we grapple with for dealing with informants, weapons, and other potentially dangerous law enforcement tools.

But at least targeted exploit tools are harder to abuse on a large scale than globally mandated backdoors in every switch, every router, every application, every device.

While the thought of the FBI exploiting vulnerabilities to conduct authorized wiretaps makes us a bit queasy, at least that approach leaves the infrastructure, and everyone else's devices, alone.

Ultimately, not much is gained – but too much is lost – by mandating special "lawful intercept" interfaces in internet systems. There’s no need to talk about adding deliberate backdoors until we figure out how to get rid of the unintentional ones ... and that won’t be for a long, long time.

Editor: Sonal Chokshi @smc90

#####EOF##### Fraudsters Target Facebook With Phishing Scam | WIRED
Fraudsters Target Facebook With Phishing Scam


Hackers for the first time are targeting the popular social networking site Facebook with a phishing scam that harvests users' login details and passwords.

Some Facebook users checking their accounts Wednesday found odd postings of messages on their "wall" from one of their friends, saying: "lol i can't believe these pics got posted.... it's going to be BADDDD when her boyfriend sees these," followed by what looks like a genuine Facebook link.

But the link leads to a fake Facebook login page hosted on a Chinese .cn domain. The fake page actually logs the victims into Facebook, but also keeps a copy of their user names and passwords.

Soon after, the hackers post messages containing the same URL on the public "walls" of the users' friends. The technique is a powerful phishing scam, because the link seems to be coming from a trusted friend.

"A lot of phishing is moving out of financial services and going to online web sites that have not installed stronger authentication, sites that are not as close to the money," said Marc Gaffan, who heads product marketing for security firm RSA's Identity and Access Assurance Group.

Thanks to the exploding popularity of social networking services – and tightened security at financial websites – fraudsters are targeting networking sites to make money in a number of ways, according to security experts.

Hackers can use the compromised profiles to host Trojan horses such as key loggers that go on to steal banking passwords and credit card numbers.

And since many people use the same logins and passwords on multiple sites, the hackers can also check if stolen Facebook credentials will log them into eBay or Amazon, for instance.

And super-sneaky crooks may be interested in mining profiles for personal information that can be used to send carefully targeted spam or malware. If someone is listed as an NFL fan, for example, hackers may send him phony NFL messages to trick him into clicking a link or installing attached malware.

Dancho Danchev, an independent security consultant, said the hackers may be trying to harvest hundreds of accounts before embedding malware that automatically infects everyone who visits the infected profiles.

"If they register a phisher.cn domain they would have to advertise it so people will come across and get infected, (but) if they get access to profiles where people will return for sure, they won't reinvent the wheel," he said. "Moreover, they do internal spamming for the usual pharmaceuticals and porn stuff automatically."

Danchev has been tracking scammers using similar Chinese .cn domains to target MySpace user accounts, he said. "The common stereotype that it's all about the money is true in this case, because they will either embed the malware, or sell the accounting data to someone else who would do it," he said.

Rob Jensen, a systems consultant, found the phishing link on his wall when he logged in to Facebook on Wednesday morning.

"A friend of mine just left a wall post, just a blank URL, and I clicked on the link and found it was a phishing site," Jensen said. "I saw the .cn domain, and being in tech I suspected it."

Jensen said he sent a message to his friend to ask her what was going on, but hadn't yet told her she had been compromised and that she should log in and change her password.

Though the phishing link mimics a typical Facebook profile link by replacing forward slashes with periods, Jensen said he put the URL in a search engine and then clicked on it in Firefox, which identified it as a phishing site.

The offending URL is h–p://www.facebook.com.profile.php.id.371233.cn/, making 371233.cn the rogue domain name. It was registered in China in November using an e-mail address that was also the contact address for some 224 other similar domain names.
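The deception works because a browser treats only the final labels of a hostname as the registered domain; everything to the left is just subdomains. Here is a quick sketch of that logic, using a naive two-label heuristic (real tools consult the Public Suffix List, since many country-code domains need three labels):

```javascript
// Naive illustration of why the link is deceptive: only the last labels of
// a hostname identify the registered domain. Production code should use the
// Public Suffix List instead of this two-label shortcut.
function registeredDomain(hostname) {
  return hostname.split('.').slice(-2).join('.');
}

console.log(registeredDomain('www.facebook.com.profile.php.id.371233.cn'));
// -> "371233.cn", the rogue domain, not facebook.com
```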

Banks and online brokerages have hardened their sites against phishing attacks using a number of techniques, ranging from requiring users to use a physical token that generates a new passcode every minute to checking what machine is logging in and requiring more information when a user attempts to log in from a different machine or geographic area.
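Those minute-by-minute passcodes are typically generated the way one-time-password tokens do it: by hashing a shared secret together with the current time step. Below is a minimal sketch assuming a 60-second step and a raw string secret; real deployments follow RFC 6238, usually with base32-encoded secrets and 30-second steps:

```javascript
// A TOTP-style passcode sketch (simplified from RFC 4226/6238).
const crypto = require('crypto');

function passcode(secret, stepSeconds = 60) {
  const counter = Buffer.alloc(8); // the current time step, as a 64-bit counter
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / stepSeconds)));
  const hmac = crypto.createHmac('sha1', secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // RFC 4226 dynamic truncation
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1000000;
  return String(code).padStart(6, '0');
}

console.log(passcode('shared-secret')); // changes every minute
```

Because both the token and the bank derive the code from the same secret and clock, a phished code goes stale within a minute, which makes it far harder to replay than a stolen password.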

Users who fall prey to phishing scams should log in and change their passwords immediately, and do the same to their e-mail and shopping accounts if they used the same password for those services.

Facebook did not respond to requests for comment by deadline.

#####EOF##### What Is Blockchain? The Complete WIRED Guide | WIRED
The WIRED Guide to the Blockchain
Illustrations by Radio

Depending on who you ask, blockchains are either the most important technological innovation since the internet or a solution looking for a problem.

The original blockchain is the decentralized ledger behind the digital currency bitcoin. The ledger consists of linked batches of transactions known as blocks (hence the term blockchain), and an identical copy is stored on each of the roughly 200,000 computers that make up the bitcoin network. Each change to the ledger is cryptographically signed to prove that the person transferring virtual coins is the actual owner of those coins. But no one can spend their coins twice, because once a transaction is recorded in the ledger, every node in the network will know about it.

Who paved the way for blockchains?

DigiCash (1989)

DigiCash was founded by David Chaum to create a digital-currency system that enabled users to make untraceable, anonymous transactions. It was perhaps ahead of its time. It went bankrupt in 1998, just as ecommerce was finally taking off.

E-Gold (1996)

E-gold was a digital currency backed by real gold. The company was plagued by legal troubles, and its founder Douglas Jackson eventually pled guilty to operating an illegal money-transfer service and conspiracy to commit money laundering.

B-Money and Bit-Gold (1998)

Cryptographers Wei Dai (B-money) and Nick Szabo (Bit-gold) each proposed separate but similar decentralized currency systems with a limited supply of digital money issued to people who devoted computing resources.

Ripple Pay (2004)

Now a cryptocurrency, Ripple started out as a system for exchanging digital IOUs between trusted parties.

Reusable Proofs of Work (RPOW) (2004)

RPOW was a prototype of a system for issuing tokens that could be traded with others in exchange for computationally intensive work. It was inspired in part by Bit-gold and created by bitcoin's second user, Hal Finney.

The idea is to both keep track of how each unit of the virtual currency is spent and prevent unauthorized changes to the ledger. The upshot: No bitcoin user has to trust anyone else, because no one can cheat the system.
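The mechanics are easy to see in miniature. This toy sketch (in Node.js) shows only the hash-linking that makes the ledger tamper-evident; it leaves out signatures, consensus, and mining entirely:

```javascript
// A toy block structure: each block commits to its predecessor by hash.
const crypto = require('crypto');

function makeBlock(transactions, prevHash) {
  const hash = crypto.createHash('sha256')
    .update(prevHash + JSON.stringify(transactions))
    .digest('hex');
  return { transactions, prevHash, hash };
}

const genesis = makeBlock(['alice pays bob 1 coin'], '0'.repeat(64));
const next = makeBlock(['bob pays carol 1 coin'], genesis.hash);

// Rewriting history changes a block's hash, which then fails to match the
// prevHash recorded in every later block; every copy of the ledger can
// detect the tampering.
console.log(next.prevHash === genesis.hash); // true, until someone cheats
```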

Other digital currencies have imitated this basic idea, often trying to solve perceived problems with bitcoin by building new cryptocurrencies on new blockchains. But advocates have seized on the idea of a decentralized, cryptographically secure database for uses beyond currency. Its biggest boosters believe blockchains can not only replace central banks but usher in a new era of online services outside the control of internet giants like Facebook and Google. These new-age apps would be impossible to censor, advocates say, and would be more answerable to users.

Several companies are already taking advantage of the Ethereum platform, initially built for a virtual currency. The startup Storj offers a file-storage service, banking on the idea that distributing files across a decentralized network is safer than putting all your files in one cabinet.

Meanwhile, despite the fact that bitcoin was originally best known for enabling illicit drug sales over the internet, blockchains are finding acceptance in some of the world's largest companies. Amazon, Google, and Facebook are all exploring the technology. And perhaps most surprisingly, some big financial services companies, including JP Morgan and the Depository Trust & Clearing Corporation, are experimenting with blockchains and blockchain-like technologies to improve the efficiency of trading stocks and other assets. Traders buy and sell stocks rapidly, but the behind-the-scenes process of transferring ownership of those assets can take days. Some technologists believe blockchains could help with that.

There are also potential applications for blockchains in the seemingly boring world of corporate compliance. After all, storing records in an immutable ledger is a pretty good way to assure auditors that those records haven't been tampered with. This might be good for more than just catching embezzlers or tax cheats. Walmart, for example, is experimenting with using the blockchain to track its supply chain, which could help it trace the source of food contaminants.

It's too early to say which experiments will work out or whether the results of successful experiments will resemble the bitcoin blockchain. But the idea of creating tamper-proof databases has captured the attention of everyone from anarchist techies to staid bankers.

The First Blockchain

The original bitcoin software was released to the public in January 2009. It was open source software, meaning anyone could examine the code and reuse it. And many have. At first, blockchain enthusiasts sought to simply improve on bitcoin. Litecoin, another virtual currency based on the bitcoin software, seeks to offer faster transactions.

One of the first projects to repurpose the bitcoin code to use it for more than currency was Namecoin, a system for registering ".bit" domain names. The traditional domain-name management system—the one that helps your computer find our website when you type wired.com—depends on a central database, essentially an address book for the internet. Internet-freedom activists have long worried that this traditional approach makes censorship too easy, because governments can seize a domain name by forcing the company responsible for registering it to change the central database. The US government has done this several times to shut sites accused of violating gambling or intellectual-property laws.

Namecoin tries to solve this problem by storing .bit domain registrations in a blockchain, which theoretically makes it impossible for anyone without the encryption key to change the registration information. To seize a .bit domain name, a government would have to find the person responsible for the site and force them to hand over the key.

What's an "ICO"?

Ethereum and other blockchain-based projects have raised funds through a controversial practice called an "initial coin offering," or ICO: The creators of new digital currencies sell a certain amount of the currency, usually before they’ve finished the software and technology that underpins it. The idea is that investors can get in early while giving developers the funds to finish the tech. The catch is that these offerings have traditionally operated outside the regulatory framework meant to protect investors, although that’s starting to change as more governments examine the practice.

Bitcoin’s software wasn’t designed to handle other types of applications. In 2013, a startup called Ethereum published a paper outlining an idea that promised to make it easier for coders to create their own blockchain-based software without having to start from scratch, without relying on the original bitcoin software. In 2015 the company released its platform for building “smart contracts,” software applications that can enforce an agreement without human intervention. For example, you could create a smart contract to bet on tomorrow’s weather. You and your gambling partner would upload the contract to the Ethereum network and then send a little digital currency, which the software would essentially hold in escrow. The next day, the software would check the weather and then send the winner their earnings. At least two major "prediction markets" have been built on the platform, enabling people to bet on more interesting outcomes, such as which political party will win an election.
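Real Ethereum contracts are written in on-chain languages like Solidity, but the escrow-and-settle logic of that weather bet can be sketched in plain JavaScript. The weather oracle here is invented for the example:

```javascript
// Illustrative only: this mimics the logic a smart contract would enforce.
async function fetchWeather(date) {
  return 'rain'; // stand-in for an oracle feed the contract would consult
}

async function settleBet(bet) {
  const pot = bet.stakeA + bet.stakeB;          // the funds held "in escrow"
  const weather = await fetchWeather(bet.date); // check the agreed-upon date
  const winner = weather === bet.prediction ? bet.playerA : bet.playerB;
  return { payTo: winner, amount: pot };        // paid out, no human involved
}

settleBet({
  playerA: 'you', playerB: 'your gambling partner',
  stakeA: 1, stakeB: 1,        // a little digital currency from each side
  date: '2018-05-24', prediction: 'rain',
}).then(console.log);           // -> { payTo: 'you', amount: 2 }
```

The point of the real thing is that this logic runs on the blockchain itself, so neither bettor can back out of paying.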

So long as the software is written correctly, there's no need to trust anyone in these transactions. But that turns out to be a big catch. In 2016 a hacker made off with about $50 million worth of Ethereum's custom currency intended for a democratized investment scheme where investors would pool their money and vote on how to invest it. A coding error allowed a still unknown person to make off with the virtual cash. Lesson: It's hard to remove humans from transactions, with or without a blockchain.

Even as cryptography geeks plotted to use blockchains to topple, or at least bypass, big banks, the financial sector began its own experiments with blockchains. In 2015, some of the largest financial institutions in the world, including JP Morgan, the Bank of England, and the Depository Trust & Clearing Corporation (DTCC), announced that they would collaborate on open source blockchain software under the name Hyperledger. Several pieces of software have been released under the Hyperledger umbrella, including Sawtooth, created by Intel for building custom blockchains.

The industry is already experimenting with using blockchains to make security trades more efficient. Nasdaq OMX, the company behind the Nasdaq stock exchange, began allowing private companies to use blockchains to manage shares in 2015, starting with a company called Chain. Similarly, the Australian Securities Exchange announced a deal to use blockchain technology from a Goldman Sachs-backed startup called Digital Asset Holdings to power the post-trade processes of Australia’s equity market.

The Future of Blockchain

Despite the blockchain hype—and many experiments—there’s still no "killer app" for the technology beyond currency speculation. And while auditors and health inspectors might like the idea of immutable records, as a society we don't always want records to be permanent.

Blockchain proponents admit that it could take a while for the technology to catch on. After all, the internet's foundational technologies were created in the 1960s, but it took decades for the internet to become ubiquitous.

That said, the idea could eventually show up in lots of places. For example, your digital identity could be tied to a token on a blockchain. You could then use that token to log in to apps, open bank accounts, apply for jobs, or prove that your emails or social-media messages are really from you. That could be especially useful for refugees, who have lost their native proofs of identity or never had any to begin with.

Future social networks might be built on connected smart contracts that show your posts only to certain people or enable people who create popular content to be paid in cryptocurrencies. Perhaps the most radical idea is using blockchains to handle voting. The team behind the open source project Sovereign built a platform that organizations, companies, and even governments can already use to gather votes on a blockchain.

Advocates believe blockchains can help automate many tasks now handled by lawyers or other professionals. For example, your will might be stored in a blockchain. Or perhaps your will could be a smart contract that will automatically dole out your money to your heirs. Or maybe blockchains will replace notaries.

It's also entirely possible that blockchains will evolve into something completely different. Many corporate experiments involve "private" blockchains that run on servers within a single company and selected partners. In contrast, anyone can run bitcoin or Ethereum software on their computer and view all of the transactions recorded on the networks’ respective blockchains. But big companies prefer to keep their data in the hands of a few employees, partners, and perhaps regulators.

Bitcoin proved that it’s possible to build an online service that operates outside the control of any one company or organization. The task for blockchain advocates now is proving that that’s actually a good thing.

Learn More

This guide was last updated on May 23, 2018.

Enjoyed this deep dive? Check out more WIRED Guides.

#####EOF##### Air Gap Hacker Mordechai Guri Steals Data With Noise, Light, and Magnets | WIRED
Mind the Gap: This Researcher Steals Data With Noise, Light, and Magnets

HOTLITTLEPOTATO

The field of cybersecurity is obsessed with preventing and detecting breaches, finding every possible strategy to keep hackers from infiltrating your digital inner sanctum. But Mordechai Guri has spent the last four years fixated instead on exfiltration: How spies pull information out once they've gotten in. Specifically, he focuses on stealing secrets sensitive enough to be stored on an air-gapped computer, one that's disconnected from all networks and sometimes even shielded from radio waves. Which makes Guri something like an information escape artist.

More, perhaps, than any single researcher outside of a three-letter agency, Guri has fixated his career on defeating air gaps by using so-called "covert channels," stealthy methods of transmitting data in ways that most security models don't account for. As director of the Cybersecurity Research Center at Israel's Ben-Gurion University, the 38-year-old Guri has led his team in inventing one devious hack after another that takes advantage of the accidental and little-noticed emissions of a computer's components—everything from light to sound to heat.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

An Exfiltration Game

"Everyone was talking about breaking the air gap to get in, but no one was talking about getting the information out," Guri says of his initial covert channel work, which he started at Ben-Gurion in 2014 as a PhD student. "That opened the gate to all this research, to break the paradigm that there's a hermetic seal around air-gapped networks."

Guri's research, in fact, has focused almost exclusively on siphoning data out of those supposedly sealed environments. His work also typically makes the unorthodox assumption that an air-gapped target has already been infected with malware by, say, a USB drive, or other temporary connection used to occasionally update software on the air-gapped computer or feed it new data. Which isn't necessarily too far a leap to make; that is, after all, how highly targeted malware like the NSA's Stuxnet and Flame penetrated air-gapped Iranian computers a decade ago, and how Russia's "agent.btz" malware infected classified Pentagon networks around the same time.

Mordechai Guri

Guri's work aims to show that once that infection has happened, hackers don't necessarily need to wait for another traditional connection to exfiltrate stolen data. Instead, they can use more insidious means to leak information to nearby computers—often to malware on a nearby smartphone, or another infected computer on the other side of the air gap.

Guri's team has "made a tour de force of demonstrating the myriad ways that malicious code deployed in a computer can manipulate physical environments to exfiltrate secrets," says Eran Tromer, a research scientist at Columbia. Tromer notes, however, that the team often tests their techniques on consumer hardware that's more vulnerable than stripped-down machines built for high security purposes. Still, they get impressive results. "Within this game, answering this question of whether you can form an effective air gap to prevent intentional exfiltration, they’ve made a resounding case for the negative."

A Magnetic Houdini

On Wednesday, Guri's Ben-Gurion team revealed a new technique they call MAGNETO, which Guri describes as the most dangerous yet of the dozen covert channels they've developed over the last four years. By carefully coordinating operations on a computer's processor cores to create certain frequencies of electrical signals, their malware can electrically generate a pattern of magnetic forces powerful enough to carry a small stream of information to nearby devices.

The team went so far as to build an Android app they call ODINI, named for the escape artist Harry Houdini, to catch those signals using a phone's magnetometer, the magnetic sensor that enables its compass and remains active even when the phone is in airplane mode. Depending on how close that smartphone "bug" is to the target air-gapped computer, the team could exfiltrate stolen data at between one and 40 bits a second—even at the slowest rate, fast enough to steal a password in a minute, or a 4096-bit encryption key in a little over an hour, as shown in the video below:
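The arithmetic behind those claims is straightforward:

```javascript
// Sanity-checking the exfiltration rates above (1 to 40 bits per second).
const keyBits = 4096;
console.log(keyBits / 1 / 60); // ≈ 68 minutes at 1 bit/s, "a little over an hour"
console.log(keyBits / 40);     // ≈ 102 seconds at the full 40 bits/s
```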

Plenty of other electromagnetic covert channel techniques have in the past used the radio signals generated by computers' electromagnetism to spy on their operations—the NSA's decades-old implementation of the technique, which the agency called Tempest, has even been declassified. But in theory, the radio signals on which those techniques depend would be blocked by the metal shielding of Faraday cages around computers, or even entire Faraday rooms used in some secure environments.

Guri's technique, by contrast, communicates not via electromagnetically induced radio waves but with strong magnetic forces that can penetrate even those Faraday barriers, like metal-lined walls or a smartphone kept in a Faraday bag. "The simple solution to other techniques was simply to put the computer in a Faraday cage and all the signals are jailed," Guri says. "We've shown it doesn’t work like that."

Secret Messages, Drones, and Blinking Lights

For Guri, that Faraday-busting technique caps off an epic series of data heist tricks, some of which he describes as far more "exotic" than his latest. The Ben-Gurion team started, for instance, with a technique called AirHopper, which used a computer's electromagnetism to transmit FM radio signals to a smartphone, a kind of modern update to the NSA's Tempest technique. Next, they proved with a tool called BitWhisper that the heat generated by a piece of malware manipulating a computer's processor can directly—if slowly—communicate data to adjacent, disconnected computers.

In 2016, his team switched to acoustic attacks, showing that they could use the noise generated by a hard drive's spinning or a computer's internal fan to send 15 to 20 bits a minute to a nearby smartphone. The fan attack, they show in the video below, works even when music is playing nearby:

More recently, Guri's team began playing with light-based exfiltration. Last year, they published papers on using the LEDs of computers and routers to blink out Morse-code-like messages, and even used the infrared LEDs on surveillance cameras to transmit messages that would be invisible to humans. In the video below, they show that LED-blinked message being captured by a drone outside a facility's window. And compared to previous methods, that light-based transmission is relatively high bandwidth, sending a megabyte of data in half an hour. If the exfiltrator is willing to accept a slightly slower data rate, the malware can even send its signals with flashes so brief they're undetectable to human eyes.

Guri says he remains so fixated on the specific challenge of air gap escapes in part because it involves thinking creatively about how the mechanics of every component of a computer can be turned into a secret beacon of communication. "It goes way beyond typical computer science: electrical engineering, physics, thermodynamics, acoustic science, optics," he says. "It requires thinking 'out of the box,' literally."

And the solution to the exfiltration techniques he and his team have demonstrated from so many angles? Some of his techniques can be blocked with simple measures, from more shielding to greater amounts of space between sensitive devices to mirrored windows that block peeping drones or other cameras from capturing LED signals. The same sensors in phones that can receive those sneaky data transmissions can also be used to detect them. And any radio-enabled device like a smartphone, Guri warns, should be kept as far as possible from air-gapped devices, even if those phones are carefully stored in a Faraday bag.

But Guri notes that some even more "exotic" and science fictional exfiltration methods may not be so easy to prevent in the future, particularly as the internet of things becomes more intertwined with our daily lives. What if, he speculates, it's possible to squirrel away data in the memory of a pacemaker or insulin pump, using the radio connections those medical devices use for communications and updates? "You can't tell someone with a pacemaker not to go to work," Guri says.

An air gap, in other words, may be the best protection that the cybersecurity world can offer. But thanks to the work of hackers like Guri—some with less academic intentions—that space between our devices may never be entirely impermeable again.


#####EOF##### A 1.3-Tbs DDoS Hit GitHub, the Largest Yet Recorded | WIRED
GitHub Survived the Biggest DDoS Attack Ever Recorded

Getty Images

On Wednesday, at about 12:15 pm EST, 1.35 terabits per second of traffic hit the developer platform GitHub all at once. It was the most powerful distributed denial of service attack recorded to date—and it used an increasingly popular DDoS method, no botnet required.

GitHub briefly struggled with intermittent outages as a digital system assessed the situation. Within 10 minutes it had automatically called for help from its DDoS mitigation service, Akamai Prolexic. Prolexic took over as an intermediary, routing all the traffic coming into and out of GitHub, and sent the data through its scrubbing centers to weed out and block malicious packets. After eight minutes, attackers relented and the assault dropped off.
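
For a sense of scale, had the attack held its 1.35-terabit peak for the full eight minutes (it didn't; that figure was the high-water mark), the arithmetic looks like this:

```typescript
// Back-of-envelope: total volume if the 1.35-Tbps peak had held for
// the full eight minutes (it did not; the peak was the high-water mark).
const peakBitsPerSec = 1.35e12; // 1.35 terabits per second
const durationSec = 8 * 60;
const totalBytes = (peakBitsPerSec / 8) * durationSec;
console.log(`${(totalBytes / 1e12).toFixed(0)} terabytes`); // ~81 terabytes
```

That is on the order of 80 terabytes of junk traffic aimed at a single service in under ten minutes.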

The scale of the attack has few parallels, but a massive DDoS that struck the internet infrastructure company Dyn in late 2016 comes close. That barrage peaked at 1.2 terabits per second and caused connectivity issues across the US as Dyn fought to get the situation under control.

“We modeled our capacity based on five times the biggest attack that the internet has ever seen,” Josh Shaul, vice president of web security at Akamai, told WIRED hours after the GitHub attack ended. “So I would have been certain that we could handle 1.3 Tbps, but at the same time we never had a terabit and a half come in all at once. It’s one thing to have the confidence. It’s another thing to see it actually play out how you’d hope.”

Real-time traffic from the DDoS attack.
Akamai

Akamai defended against the attack in a number of ways. In addition to Prolexic's general DDoS defense infrastructure, the firm had also recently implemented specific mitigations for a type of DDoS attack stemming from so-called memcached servers. These database caching systems work to speed networks and websites, but they aren't meant to be exposed on the public internet; anyone can query them, and they'll likewise respond to anyone. About 100,000 memcached servers, mostly owned by businesses and other institutions, currently sit exposed online with no authentication protection, meaning an attacker can access them and send them a special command packet that the server will respond to with a much larger reply.
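
For administrators wondering whether their own cache is part of the problem, one way to check is to send the same sort of tiny query an attacker would and see whether anything comes back. Below is a minimal Node/TypeScript sketch for testing a server you control; the target address is a placeholder, and the eight-byte prefix is memcached's documented UDP frame header (request ID, sequence number, datagram count, and a reserved field):

```typescript
import * as dgram from "node:dgram";

// Probe a memcached server *you control* to see whether it answers UDP
// queries from outside. An exposed server answers this tiny "stats"
// query with a far larger reply.
const target = process.argv[2] ?? "203.0.113.10"; // placeholder address
const PORT = 11211; // memcached's default port

// 8-byte UDP frame header: request ID, sequence number (0),
// total datagrams (1), reserved (0); each field is 16 bits.
const header = Buffer.from([0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00]);
const query = Buffer.concat([header, Buffer.from("stats\r\n")]);

const sock = dgram.createSocket("udp4");
const timer = setTimeout(() => {
  console.log("no reply; the server is likely not exposed over UDP");
  sock.close();
}, 3000);

sock.on("message", (reply, rinfo) => {
  clearTimeout(timer);
  const factor = (reply.length / query.length).toFixed(1);
  console.log(
    `${rinfo.address} answered a ${query.length}-byte query with ` +
      `${reply.length} bytes (${factor}x amplification)`
  );
  sock.close();
});

sock.send(query, PORT, target);
```

Note that a large reply may span many datagrams; the sketch measures only the first, so the true amplification factor is usually higher than what it reports.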

Unlike the formal botnet attacks used in large DDoS efforts, like against Dyn and the French telecom OVH, memcached DDoS attacks don't require a malware-driven botnet. Attackers simply spoof the IP address of their victim and send small queries to multiple memcached servers—about 10 per second per server—that are designed to elicit a much larger response. The memcached systems then return 50 times the data of the requests back to the victim.

Known as an amplification attack, this type of DDoS has shown up before. But as internet service and infrastructure providers have seen memcached DDoS attacks ramp up over the last week or so, they've moved swiftly to implement defenses to block traffic coming from memcached servers.

"Large DDoS attacks such as those made possible by abusing memcached are of concern to network operators," says Roland Dobbins, a principal engineer at the DDoS and network-security firm Arbor Networks who has been tracking the memcached attack trend. "Their sheer volume can have a negative impact on the ability of networks to handle customer internet traffic."

The infrastructure community has also started attempting to address the underlying problem, by asking the owners of exposed memcached servers to take them off the internet, keeping them safely behind firewalls on internal networks. Groups like Prolexic that defend against active DDoS attacks have already added or are scrambling to add filters that immediately start blocking memcached traffic if they detect a suspicious amount of it. And if internet backbone companies can ascertain the attack command used in a memcached DDoS, they can get ahead of malicious traffic by blocking any memcached packets of that length.
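
The decision logic those filters embody is simple enough to sketch. Here is an illustrative TypeScript rendering; real deployments do this in routers and scrubbing centers at line rate, and the specific length threshold below is an assumption, not any provider's actual rule:

```typescript
// Illustrative decision logic for filtering memcached reflection traffic.
// Real mitigations run in routers and scrubbing centers, not JavaScript;
// the length threshold below is an assumption for illustration only.
const MEMCACHED_PORT = 11211; // memcached's default port

interface UdpPacket {
  srcPort: number;
  dstPort: number;
  payloadLength: number;
}

// Hypothetical: the observed length of the attack command on the wire.
const ATTACK_COMMAND_LENGTH = 15;

function shouldDrop(pkt: UdpPacket): boolean {
  // Reflected replies arrive *from* port 11211; block them outright.
  if (pkt.srcPort === MEMCACHED_PORT) return true;
  // Trigger queries head *to* port 11211; if they match the known
  // attack-command length, drop them before they reach a reflector.
  return (
    pkt.dstPort === MEMCACHED_PORT &&
    pkt.payloadLength === ATTACK_COMMAND_LENGTH
  );
}
```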

"We are going to filter that actual command out so no one can even launch the attack," says Dale Drew, chief security strategist at the internet service provider CenturyLink. And companies need to work quickly to establish these defenses. "We’ve seen about 300 individual scanners that are searching for memcached boxes, so there are at least 300 bad guys looking for exposed servers," Drew adds.

"It’s one thing to have the confidence. It’s another thing to see it actually play out how you’d hope."

Josh Shaul, Akamai

Most of the memcached DDoS attacks CenturyLink has seen top out at about 40 to 50 gigabits per second, but the industry had increasingly been seeing bigger attacks, of up to 500 Gbps and beyond. On Monday, Prolexic defended against a 200-Gbps memcached DDoS attack launched against a target in Munich.

Wednesday's onslaught wasn't the first time a major DDoS attack targeted GitHub. The platform faced a six-day barrage in March 2015, possibly perpetrated by Chinese state-sponsored hackers. The attack was impressive for 2015, but DDoS techniques and platforms, particularly botnets built from Internet of Things devices, have evolved and grown far more powerful at their peaks. To attackers, though, the beauty of memcached DDoS attacks is that there's no malware to distribute and no botnet to maintain.

The web monitoring and network intelligence firm ThousandEyes observed the GitHub attack on Wednesday. "This was a successful mitigation. Everything transpired in 15 to 20 minutes," says Alex Henthorne-Iwane, vice president of product marketing at ThousandEyes. "If you look at the stats you’ll find that globally speaking DDoS attack detection alone generally takes about an hour plus, which usually means there’s a human involved looking and kind of scratching their head. When it all happens within 20 minutes you know that this is driven primarily by software. It’s nice to see a picture of success."

GitHub continued routing its traffic through Prolexic for a few hours to ensure that the situation was resolved. Akamai's Shaul says he suspects that attackers targeted GitHub simply because it is a high-profile service that would be impressive to take down. The attackers also may have been hoping to extract a ransom. "The duration of this attack was fairly short," he says. "I think it didn’t have any impact so they just said that’s not worth our time anymore."

Until memcached servers get off the public internet, though, it seems likely that attackers will give a DDoS of this scale another shot.


#####EOF##### GitHub Atom's Code-Editor Nerds Take Over Their Universe | WIRED
GitHub Atom's Code-Editor Nerds Take Over Their Universe

Chris Wanstrath was in love with Emacs.

Emacs is a nearly 40-year-old computer program that lets you, well, edit text. It's a way of tinkering with obscure files buried inside a computer's operating system or, better yet, building new computer programs. Wanstrath fell in love with it because it offered a way of building itself. "It's the Holy Grail of editors. It's essentially written in itself," he says. "You can build a plug-in for the editor that can do anything the editor is capable of doing."

If you ply your trade outside the world of computing, that may sound odd. "You don't get this in too many other professions," Wanstrath says, "though there may be some carpenters who use hammers to build hammers." If you're a coder, however, this kind of recursiveness is commonplace—and extremely useful. It can make coding easier. And more powerful. "I'm like a lot of programmers," Wanstrath says. "I love the idea that the tools you use every day can be used to customize and influence the tools you use every day."


But as much as he loved Emacs, Wanstrath also knew it was flawed. If you want to rebuild Emacs with Emacs, you have to use an Emacs-ified version of Lisp, an older programming language that isn't as widely used as more modern languages. "The rise and fall of Lisp has already happened," Wanstrath says. So, in the summer of 2008, Wanstrath started building an Emacs for the modern world, an editor that offered a way of building itself via JavaScript, the lingua franca of the web.

Somewhere along the way, he got sidetracked. Wanstrath is the co-founder and CEO of GitHub, and in those days, he was busy building the company into the center of the coding universe. But seven years after he first cooked up the idea for his new-age code editor, it has arrived. It's called Atom, and today, Wanstrath and GitHub are set to unveil version 1.0 at a conference in Tennessee. Atom has reached the point, he says, where anyone can use it to build Atom.

Building Stuff for Building Stuff

Of course, this being the age of open source software, people are already building with Atom. GitHub open sourced an early "beta" version of Atom about a year ago, sharing the underlying code with the rest of the 'net, and since then, the tool has been downloaded 1.3 million times, with over 350,000 people using the thing on a regular basis.

At Facebook, developers have already used Atom to build their own Atom, a text editor called Nuclide that's tailored for the unusually enormous amount of code that runs the Facebook empire. Others are building all sorts of new plug-ins for Atom, including one that auto-completes code as you type and another that scans code for errors. A company called Nylas is even transforming Wanstrath's editor into an email client.

Atom is a symbol for a changing software world. In the past, businesses would use what they could buy from companies like Microsoft and Oracle and Apple. And that was that. Now, with the rise of open source, businesses can build exactly what they need, rather than just relying on what's available. With the tools they use to build stuff, they even build better tools for building stuff. All this lets businesses evolve in bigger ways, at a faster pace.

Yes, so many other editors provide a way of customizing what they do, from Vim to Notepad. But typically, they're written in a language like C (which lets them operate at speed), and you customize them using some sort of simple scripting language (which lets you operate at speed). As Wanstrath explains, this limits what you can customize. "You don't have access to the engine." But Atom, he says, is different: Everything is built with JavaScript.

Wanstrath acknowledges that he's an "editor nerd." But that goes for so many coders. And whether you're an editor nerd or not, the point is that you can use Atom to build pretty much whatever editor you want, using a familiar, relatively simple language.
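
To give a flavor of what that looks like, here is a sketch of a toy Atom package; the command registration and notification calls are part of Atom's public JavaScript API, but the package itself is hypothetical:

```typescript
// A toy Atom package (hypothetical) that adds a "word-count:show" command.
// `atom` is the global API object that Atom injects into every package.
declare const atom: any;

export function activate(): void {
  // Register a command users can invoke from the command palette.
  // (A production package would keep the Disposable this returns
  // and dispose of it in deactivate().)
  atom.commands.add("atom-workspace", {
    "word-count:show": () => {
      const editor = atom.workspace.getActiveTextEditor();
      if (!editor) return; // no text editor has focus
      const words = editor.getText().split(/\s+/).filter(Boolean).length;
      atom.notifications.addInfo(`Word count: ${words}`);
    },
  });
}
```

Nothing in that file runs in a restricted scripting layer; it executes in the same JavaScript runtime that draws Atom's own interface, which is exactly the access Wanstrath is describing.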

Flexible Enough for Facebook

Previously, Facebook used Apple's Xcode software to build its big blue iPhone app. But the code for the app grew so large—apparently, Facebook's code base is nearly as big as Microsoft's Windows operating system—that Xcode couldn't really handle it. Across the company, it would crash about 50 times a day. "Xcode didn't scale for our needs," says Facebook's Mike Bolin. "It scales for small developer teams, even medium-sized teams. But we're off the charts." So the company built its own editor with Atom.

Atom was particularly useful, Bolin says, because they could customize it with JavaScript and other web technologies. That meant practically any Facebook developer could hack on Nuclide. "It has a making-a-web-page feel to it," he explains.

In turn, Facebook has now open sourced Nuclide, and the process can repeat—ad infinitum. Atom is an editor that lets you build an editor that lets you build an editor. And so on. And so forth. The upshot is that Facebook can feed itself by giving something away. Others, outside of the company, can help improve what it has built.

The GitHub Way

Much the same goes for GitHub. In giving Atom away, Wanstrath and GitHub can move toward their own goals. An open source Atom is a better Atom. "It's one thing for me to be able to hack my editor," Wanstrath says. "But what's way more powerful is that I can use other plug-ins that other people have written." What's more, Atom dovetails with GitHub, the primary repository for open source code on the 'net. The more people use Atom—and its many incarnations—the more they use GitHub.

“Facebook is an example of that," Wanstrath explains. "We released Atom. They built this Nuclide thing on top of it. And ultimately, all those people are contributing to the community, contributing to GitHub."

Yes, the software is free. But for both Facebook and GitHub, this free software can ultimately feed the bottom line. If Facebook can improve its iPhone app at a faster pace, more people will use it, and that means the company can serve them more ads. If more companies use GitHub, more will pay for private code repositories or shell out for GitHub Enterprise, a way of running the service on your own machines.

That may sound like a stretch. It may sound idealistic. But it's the way things work in the modern software world. This is how Facebook and GitHub operate, and both are enormously successful companies. The editor nerds have coded their way to the center of the software universe.

#####EOF#####